
  • Wouldn't the more logical first approximation be to bury them underground, and then progress towards (perhaps) placing them in or near the ocean (obviously, within sealed containers, yadda yadda, salt corrosion, yadda yadda, inhospitable environ yadda yadda makes Poseidon angry).

    I like the "yeet them into the sea" idea conceptually because (1) yeet them into the sea (2) in theory, you could power them via tidal/wave/OTEC (3) water cooling.

    Seems...too obvious. There's probably a good reason (or bad ones - $$$) why this hasn't been tried yet. But I bet those reasons are eminently more solvable than "send 'em into space"

  • Codex 5.3.

    Claude, play - "The Sound of Silence"

    Hello darkness my old friend 🎵 🎶

  • Surprised and disappointed, both by them and the system (capitalism) that stops us from having nice things.

    If we ever crack AGI, it's probably going to be because the market optimised for the better shilling of dick pills, crypto scams and spyware.

    That's...fucking bleak, in the Hide Pain Harold way.

  • The water thing still baffles me. Like...just...cycle it. It's a heat exchange system.

    What do they do with the water? Pump thru once and then dump it? Why can't they repurpose it? Why can't they use gray water?

    I don't get it but that's likely a me problem.

  • Once you go Notepad++ you never go back

  • Oh - you mean Gustav, Bernhardt, Daffid and Chompy? How are things in Ulaanbaatar anyway?

    (you're welcome)

  • What's worse....you could always toggle it. In fact, you could re-route it to your own local LLM.

    Drama drama cheesecake drama

  • Ok, if you're willing to think together out loud, I'll take that in good faith and respond in kind.

    "It needed the rules, therefore it's not reasoning" is doing a lot of work in your argument, and I think it's where things come unstuck.

    Every reasoning system needs premises - you, me, a 4yr old. You cannot deduce conclusions from nothing. Demanding that a reasoner perform without premises isn't a test of reasoning, it's a demand for magic. Premise-dependence isn't a bug, it's the definition.

    If you want to argue that humans auto-generate premises dynamically - fair point. But that's a difference in where the premises come from, not whether reasoning is occurring.

    Look again at what the rules actually were: https://pastes.io/rules-a-ph

    No numbers, containers, or scenarios. Just abstract rules about how bounded systems work. Most aren't even physics - they're logical constraints. Premises, in the strict sense.

    It's the sort of logic a child learns informally via play. If we don't consider kids learning the rules by knocking cups over "cheating", then me telling the LLM "these are the rules" in the way it understands should be fair game.

    When the LLM correctly handles novel chained problems, including the 4oz cup already holding 3oz, tracking state across two operations, that's deriving conclusions from general premises applied to novel instances. That's what deductive reasoning is, per the definition I cited. It's what your kid groks (eventually).

    “Without the rules it fails” - without context, humans make the same errors. Ask a 4 year old whether a taller cup holds more fluid than a rounder one. Default assumptions under uncertainty aren’t a failure of reasoning, they’re a feature of any system with incomplete information.

    "It'll fail sometimes across 100 runs" - so do humans under load. Probabilistic performance doesn't disqualify a process from being reasoning. It just makes it imperfect reasoning, which is the only kind that exists.

    The Wizard of Oz analogy is vivid but does no logical work. "Complicated math and clever programming" describes implementation, not function. Your neurons are electrochemical signals on evolved heuristics. If that rules out reasoning, it rules out all reasoning everywhere. If it doesn't rule out yours, you need a principled account of why it rules out the LLM's.

    PS: I believe you're wrong about the give it 100 runs = different outcomes thing. With proper grounding, my local 4B model hit 0/120 hallucination flags and 15/15 identical outputs across repeated clinical test cases. Draft pre-publication data, methodology and raw outputs included here: https://codeberg.org/BobbyLLM/llama-conductor/src/branch/main/prepub/PAPER.md

    I'm willing to test the liquid transformations thing and collect data. I might do that anyway. That little meme test is actually really good.

  • You seem pretty sure of that. Is your position firm or are you willing to consider contrary evidence?

    Definition: https://www.wordnik.com/words/reasoning

    • Evidence or arguments used in thinking or argumentation.

    • The deduction of inferences or interpretations from premises; abstract thought; ratiocination.

    Evidence: https://lemmy.world/post/43503268/22326378

    I believe this clearly shows the LLM can perform something functionally equivalent to deductive reasoning when given clear premises.

    "Auto-complete" is lazy framing. A calculator is "just" voltage differentials on silicon. That description is true and also tells you nothing useful about whether it's doing arithmetic.

    The question of whether something is or isn't reasoning isn't answered by describing what it runs on; it's answered by looking at whether it exhibits the structural properties of reasoning: consistency across novel inputs, correct application of inference rules, sensitivity to logical relationships between premises. I think the above example shows something in that direction. YMMV.

  • Actually, you know what? That's a damn good test. So - I decomposed your test into a basic series of global rules and fed it to my LLM. Here are the rules I fed it

    https://pastes.io/rules-a-ph

    Notice I specifically did NOT give it measurements or anything of the sort. I defined global invariants and how things work. Then I asked it your question again. It said this -


    The cup has a 16-ounce capacity. Initially, it contains 10 ounces of water. When you add 10 more ounces, the total input is 20 ounces. Since 20 ounces exceeds the cup’s 16-ounce capacity, overflow occurs. The water physically present in the cup equals its full capacity: 16 ounces. The remaining 4 ounces overflow and do not remain inside the cup.

    Overflow is distinct from the contained volume. The physical state of the cup is determined by its capacity, not by the total input. Therefore, the amount of water actually in the cup is 16 ounces, not 20 ounces.


    I wanted to check if that was a fluke, so I then asked a follow-up -

    "I pour 5oz into a 3oz cup, then pour everything that's in the cup into a 10oz cup. How much is in the 10oz cup?"

    It said:

    From the scratchpad facts:

    - A physical container has a fixed maximum volume called its capacity.

    First action: pouring 5oz into a 3oz cup.

    - The cup’s capacity is 3oz, so it fills completely.
    - Lost liquid: 5oz - 3oz = 2oz.

    Second action: pouring everything from the 3oz cup into a 10oz cup.

    - The 10oz cup has a capacity of 10oz, which is greater than 3oz.
    - All 3oz are transferred.

    Final amount in the 10oz cup: 3oz.


    Finally, I asked it:

    "Ok now: pour 5oz into a 3oz cup, then pour everything in the cup into a 4oz cup that already has 3oz in it"

    It said:

    Pouring 5 oz into a 3 oz cup fills it to capacity (3 oz) with 2 oz overflow. Pouring this full 3 oz cup into a 4 oz cup that already has 3 oz results in a full 4 oz cup (3 oz pre-existing + 1 oz from the first cup) with remainder overflow.


    Yeah....that's some pretty fucking fancy "auto-complete". /s
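    For anyone who wants to sanity-check those three answers, the whole ruleset boils down to a single clamp-to-capacity operation. Here's a quick Python sketch (my own, not part of the pastes.io rules):

```python
# Model of the bounded-container invariant: a cup holds at most its
# capacity; anything poured beyond that overflows and is lost.

def pour(contents, capacity, amount):
    """Pour `amount` into a cup; return (new_contents, overflow)."""
    total = contents + amount
    contained = min(total, capacity)
    return contained, total - contained

# Test 1: 16oz cup already holding 10oz, add 10oz more.
print(pour(10, 16, 10))       # (16, 4): cup is full, 4oz overflow

# Test 2: 5oz into an empty 3oz cup, then everything into a 10oz cup.
first, lost = pour(0, 3, 5)   # (3, 2): cup fills, 2oz lost
print(pour(0, 10, first))     # (3, 0): all 3oz transfer

# Test 3: 5oz into a 3oz cup, then into a 4oz cup already holding 3oz.
a, _ = pour(0, 3, 5)          # (3, 2)
print(pour(3, 4, a))          # (4, 2): full 4oz cup, 2oz overflow
```

    All three match the LLM's answers above.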

  • Qwen3-4B HIVEMIND

    You now have 16 ounces of water in the cup. The cup can hold 16 ounces, so the rest is over capacity.

    Confidence: unverified | Source: Model

  • "The cogitation is happening in YOU" is just the philosophical zombie argument dressed up as a gotcha. Sure, there's no ghost in the machine - but that's true of your neurons too. Your brain is also "just" electrochemical signals on wet hardware. Does that mean your understanding is happening somewhere else?

    The point isn't whether there's a homunculus sitting inside the GPU having feelings. The point is that the functional operations happening - maintaining context, resolving ambiguity, applying something structurally similar to inference across novel inputs - are more than pattern-matching in the dismissive sense people mean when they say "autocomplete."

  • Apropos that, I wonder sometimes how "100 bullets" might play out IRL

  • If it was just autocomplete in the dismissive sense, white noise should make it derail into white noise. Instead, it tries to make sense of it. Why? Because it learned strong language priors from us, and it leans on those when the prompt is meaningless.

    “Not human understanding” ≠ “no reasoning-like computation.”

    Those aren't the same thing.

    People doing the "Fancy autocomplete” thing are doing the laziest possible move: not human, therefore nothing interesting happening. I disagree with that.

    It doesn’t “understand” like we do, and it’s not infallible, but calling it “fancy autocomplete” is like calling a jet engine a “fancy candle.”

    Same category of thing, wildly different behavior.

  • I hear you. Agreed.

    Have you tried running your own local LLM? Abliterated ones (safety theatre removed) can produce some startling results. As a bonus, newer ablit methods seem to increase reasoning ability, because the LLM doesn't have one foot on the brake and the other on the accelerator.

    I noticed that a fair bit in maths reasoning using Qwen 3-4B HIVEMIND. A normal LLM will tie itself in knots trying to give you the perfect answer. An ablit one will give you the workable answer and say "I know what you were after, but here's the best IRL approximation".

    Bijan did a fun review of Qwen 3-8 Josefied that's entertaining and explains the basic idea

    https://www.youtube.com/watch?v=gr5nl3P4nyM&t=0

  • Let me be the first to say:

    Fuck reddit.

  • I think we're probably on the same page, tbh. OTOH, I think the "fancy auto complete" meme is a disingenuous thought-stopper, so I speak against it when I see it.

    I like your cruise control+ analogy. It's not quite self-driving... but it's not quite just cruise control, either. Something halfway.

    LLMs don’t have human understanding or metacognition, I'm almost certain.

    But next-token prediction suggests a rich semantic model that can functionally approximate reasoning. That's weird to think about. It's something halfway.

    With external scaffolding (memory, retrieval, provenance, and fail-closed policies), I think you can turn that into even more reliable behavior.

    And then... I don't know what happens after that. There's going to come a time where we cross that point and we just can't tell any more. Then what? No idea. May we live in interesting times, as the old curse goes.
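    To make "fail-closed" concrete, here's a toy gate (my own hypothetical sketch, not any real framework): claims without provenance trigger a refusal instead of a best guess.

```python
# Toy fail-closed policy: release an answer only if every claim in it
# carries a source; otherwise refuse outright rather than guess.
# (Hypothetical structure, purely illustrative.)

def fail_closed(claims):
    """claims: list of (text, source_or_None) pairs."""
    if any(source is None for _, source in claims):
        return "REFUSED: unsourced claim present"
    return " ".join(text for text, _ in claims)

sourced = [("Water expands when it freezes.", "notes/physics.md")]
mixed = sourced + [("The cup is blue.", None)]

print(fail_closed(sourced))  # claim text is released
print(fail_closed(mixed))    # refused: one claim has no source
```

    The fail-open alternative would pass the unsourced claim through anyway, which is exactly where hallucinations sneak in.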

  • Fair point. Counter point -

    Language itself encodes meaning. If you can statistically predict the next word, then you are implicitly modeling the structure of ideas, relationships, and concepts carried by that language.

    You don’t get coherence, useful reasoning, or consistently relevant answers from pure noise. The patterns reflect real regularities in the world, distilled through human communication.

    Yes, that doesn’t mean an LLM “understands” in the human sense, or that it’s infallible.

    But reducing it to “just autocomplete” misses the fact that sufficiently rich pattern modeling can approximate aspects of reasoning, abstraction, and knowledge use in ways that are practically meaningful, even if the underlying mechanism is different from human thought.

    TL;DR: it's a bit more than just a fancy spell check. ICBW and YMMV but I believe I can argue this claim (with evidence if so needed).