Posts: 20 · Comments: 5 · Joined: 2 yr. ago
math @lemmy.sdf.org
goosethe @lemmy.sdf.org

Yang–Mills mass gap

The Ocho @lemmy.sdf.org
goosethe @lemmy.sdf.org

pogo stick high jump

The Ocho @lemmy.sdf.org
goosethe @lemmy.sdf.org

jet pack racing!

  • Without choice, it's impossible to guarantee that an arbitrary product of non-empty sets is non-empty. But the empty product is a different matter: the Cartesian product over an empty index family is, by definition, the set containing exactly the empty function, which is a singleton set and therefore non-empty. This doesn't rely on the Axiom of Choice, so ZF¬C proves that the empty Cartesian product of non-empty sets is non-empty. The question is therefore unproblematic, and the answer is the same across all standard set theories (ZF, ZF¬C, ZFC): nothing has to be selected from each set in the family (which is what would require AC); it's just the definition of the Cartesian product when there are no sets in the family.
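
    For concreteness, the function-space definition makes this a one-line computation: the only candidate element is the empty function, and it vacuously satisfies the membership condition.

    $$
    \prod_{i \in \emptyset} A_i = \Bigl\{\, f : \emptyset \to \bigcup_{i \in \emptyset} A_i \;\Bigm|\; f(i) \in A_i \text{ for all } i \in \emptyset \,\Bigr\} = \{\varnothing\}
    $$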

  • Hopf Fibration: Technically, the base space of the Hopf fibration S^1 → S^3 → S^2 is the 2-sphere S^2, but since S^2 is just the plane (equivalently, an open disk) compactified by a single point at infinity, it could count.

    Möbius Strip: The Möbius strip can be thought of as a fiber bundle over the circle S^1 whose fibers are closed intervals of the real line (a concrete construction follows this list).

    Twisted Cylinder: Similar to the Möbius strip, but with the fibers being open intervals (copies of the real line) instead of closed intervals.

    The Klein Bottle: If you take S^1 as your fiber, the Klein bottle can be seen as a nontrivial fiber bundle over the circle.

    Principal Bundles: The concept of a principal G-bundle, where G is a topological group, is a generalization of fiber bundles. For instance, the frame bundle of a manifold is a principal GL(n,R)-bundle, where GL(n,R) is the general linear group of invertible n×n matrices and n is the dimension of the manifold. As a more specific example, consider the tangent bundle of the disk D^2: its frame bundle is a principal GL(2,R)-bundle over the disk.
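
    As promised above, here is a concrete construction of the Möbius bundle, one of the simplest nontrivial examples on this list:

    $$
    M = \bigl([0,1] \times [0,1]\bigr) \,/\, \bigl((0,t) \sim (1,\,1-t)\bigr), \qquad \pi : M \to S^1, \quad \pi\bigl([s,t]\bigr) = e^{2\pi i s}
    $$

    Each fiber $\pi^{-1}(p)$ is a closed interval, and $\pi$ is locally trivial ($\pi^{-1}(U) \cong U \times [0,1]$ over small arcs $U \subset S^1$), but the half-twist in the gluing rules out any global trivialization, so $M \ne S^1 \times [0,1]$.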

  • The Ocho @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    high-level air hockey strats

    Speedrun @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    Hydro Thunder (Arcade) The Far East 1:19.46

    Speedrun @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    Super Mario Odyssey Any% (2 Player) in 56:54

    The Ocho @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    Highlights - Cycle Ball - Gold Medal Final | 2021 UCI Indoor Cycling World Championships

    The Ocho @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    Underwater rugby

    Motorcycles @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    Milwaukee Senior TT - Highlights | 2023 Isle of Man TT Races

    math @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    open source math textbooks

    Some links are broken, but it's otherwise good. Post your open source math textbooks here.

    WorldNews @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    After state board approves first taxpayer-funded Catholic school, Hindus seek same | KGOU

    math @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    some older machine learning books

    math @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    "Prompt Gisting:" Train two models such that given inputs "Translate French

    <G1>

    <G2>

    " and "

    <G1>

    G2>The cat," then G1 and G2 represent the entire instruction.

    arxiv.org Learning to Compress Prompts with Gist Tokens


    cross-posted from: https://lemmy.sdf.org/post/36227

    Abstract: "Prompting is now the primary way to utilize the multitask capabilities of language models (LMs), but prompts occupy valuable space in the input context window, and re-encoding the same prompt is computationally inefficient. Finetuning and distillation methods allow for specialization of LMs without prompting, but require retraining the model for each task. To avoid this trade-off entirely, we present gisting, which trains an LM to compress prompts into smaller sets of "gist" tokens which can be reused for compute efficiency. Gist models can be easily trained as part of instruction finetuning via a restricted attention mask that encourages prompt compression. On decoder (LLaMA-7B) and encoder-decoder (FLAN-T5-XXL) LMs, gisting enables up to 26x compression of prompts, resulting in up to 40% FLOPs reductions, 4.2% wall time speedups, storage savings, and minimal loss in output quality. "
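
    A minimal sketch of the restricted attention mask the abstract describes, assuming a toy token layout (prompt, then gist tokens, then input); the function name and layout are illustrative, not the paper's code:

    ```python
    import numpy as np

    def gist_attention_mask(seq_len: int, gist_start: int, gist_end: int) -> np.ndarray:
        """Causal mask where tokens after the gist span cannot attend to the
        raw prompt before it, so prompt information must flow through the
        gist tokens."""
        mask = np.tril(np.ones((seq_len, seq_len), dtype=int))  # 1 = may attend
        mask[gist_end:, :gist_start] = 0  # cut post-gist tokens off from the prompt
        return mask

    # toy example: 4 prompt tokens, 2 gist tokens, 3 input tokens
    print(gist_attention_mask(9, gist_start=4, gist_end=6))
    ```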

    math @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    Taming AI Bots: prevent LLMs from entering "bad" states by continuously asking the LLM itself for guidance ("is this good? bad?") and steering away from states it rates as bad.
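
    A hypothetical sketch of that guidance loop: after each candidate continuation, the same LLM is asked whether the resulting state is good or bad, and bad continuations are resampled. The `llm` object and its `.sample()` method are assumed for illustration, not taken from the paper:

    ```python
    def generate_with_guidance(llm, prompt, max_chunks=20, retries=3):
        """Generate text chunk by chunk, using the LLM's own good/bad
        judgment to steer away from continuations it rates as bad."""
        text = prompt
        for _ in range(max_chunks):
            chunk = llm.sample(text)  # propose a continuation
            for _ in range(retries):
                verdict = llm.sample(
                    "Is this conversation in a good or bad state?\n"
                    + text + chunk + "\nAnswer (good/bad):")
                if "bad" not in verdict.lower():
                    break  # accepted: the model rates the state as good
                chunk = llm.sample(text)  # resample and re-check
            text += chunk
        return text
    ```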

    math @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    The TeXbook

    math @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    open source data visualization books

    math @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    The space of homogeneous probability measures on $\overline{\Gamma \backslash X}_{\max}^{S}$ is compact

    math @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    Automorphic number theory

    math @lemmy.sdf.org
    goosethe @lemmy.sdf.org

    NVIDIA's everything 2 anything