A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • zalgotext@sh.itjust.works
    12 days ago

    I can be convinced by contrary evidence if provided. There is no evidence of reasoning in the example you linked. All that proved was that if you prime an LLM with sufficient context, it’s better at generating output, which is honestly just more support for calling them statistical auto-complete tools. Try asking it those same questions without feeding it your rules first, and I bet it doesn’t generate the right answers. Try asking it those questions 100 times after feeding it the rules, I bet it’ll generate the wrong answers a few times.

    If LLMs are truly capable of reasoning, it shouldn’t need your 16 very specific rules on “arithmetic with extra steps” to get your very carefully worded questions correct. Your questions shouldn’t need to be carefully worded. They shouldn’t get tripped up by trivial “trick questions” like the original one in the post, or any of the dozens of other questions like it that LLMs have proven incapable of answering on their own. The fact that all of those things do happen supports my claim that they do not reason, or think, or understand - they simply generate output based on their input and internal statistical calculations.

    LLMs are like the Wizard of Oz. From afar, they look like these powerful, all-knowing things. They speak confidently and convincingly, and are sometimes even correct! But once you get up close and peek behind the curtain, you realize that it’s just some complicated math, clever programming, and a bunch of pirated books back there.

    • SuspciousCarrot78@lemmy.world
      12 days ago

      Ok, I’ll take that in good faith and respond in kind.

      “It needed the rules, therefore it’s not reasoning” is your weakest point, and it’s load-bearing for your whole argument.

      Every reasoning system needs premises - you, me, an 8yr old. You cannot deduce conclusions from nothing. Demanding that a reasoner perform without premises isn’t a test of reasoning, it’s a demand for magic. Premise-dependence isn’t a bug, it’s the definition.

      Look at what the rules actually were: https://pastes.io/rules-a-ph

      No numbers, containers, or scenarios. Just abstract invariants about bounded systems — the kind of thing you’d write in formal logic. Most aren’t even physics facts, they’re logical constraints: stated quantity vs physically present quantity, overflow vs contained volume. Premises in the strict logical sense.

      When the LLM correctly handles novel chained problems, including the 4oz cup already holding 3oz, tracking state across two operations, that’s deriving conclusions from general premises applied to novel instances. That’s what deductive reasoning is, per the definition I cited.
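      The bookkeeping being tested can be made concrete. A minimal sketch, in Python rather than the linked rules’ wording - the 4oz cup already holding 3oz is from the thread, but the pour amounts in the chained example are illustrative assumptions:

```python
# Minimal sketch of the invariant the rules describe: stated quantity vs
# physically present quantity, overflow vs contained volume.
# The contained volume can never exceed capacity; the rest spills.

def pour(capacity_oz: float, current_oz: float, added_oz: float):
    """Pour added_oz into a cup of capacity_oz already holding current_oz.

    Returns (contained_oz, overflow_oz).
    """
    total = current_oz + added_oz
    contained = min(capacity_oz, total)
    overflow = max(0.0, total - capacity_oz)
    return contained, overflow

# Chained state tracking: a 4oz cup already holding 3oz (the thread's example),
# with illustrative pour amounts for the two operations.
contained, spilled = pour(capacity_oz=4.0, current_oz=3.0, added_oz=2.0)
# contained == 4.0, spilled == 1.0

# Second operation feeds the state from the first back in.
contained, spilled = pour(capacity_oz=4.0, current_oz=contained, added_oz=1.0)
# contained == 4.0, spilled == 1.0
```

      The invariant itself is just min/max over a capacity; what’s being claimed is that applying it correctly across chained operations, on numbers never stated in the rules, is deduction from general premises.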

      “Without the rules it fails” - without context, humans make the same errors. Default assumptions under uncertainty aren’t a failure of reasoning, they’re a feature of any mind with incomplete information.

      “It’ll fail sometimes across 100 runs” - so do humans under load. Probabilistic performance doesn’t disqualify a process from being reasoning. It just makes it imperfect reasoning, which is the only kind that exists.

      The Wizard of Oz analogy is vivid but does no logical work. “Complicated math and clever programming” describes implementation, not function. Your neurons are electrochemical signals on evolved heuristics. If that rules out reasoning, it rules out all reasoning everywhere. If it doesn’t rule out yours, you need a principled account of why it rules out the LLM’s.

      • zalgotext@sh.itjust.works
        11 days ago

        It needed the rules, and it needed carefully worded questions that matched the parameters set by the rules. I bet if the questions’ wording didn’t match your rules so exactly, it would generate worse answers. Heck, I bet if you gave it the rules, then asked several completely unrelated questions, then asked it your carefully worded rules-based questions, it would perform worse, because its context window would be muddied. Because that’s what it’s generating responses based on - the contents of its context window, coupled with stats-based word generation.

        I still maintain that it shouldn’t need the rules if it’s truly reasoning though. LLMs train on a massive set of data, surely the information required to reason out the answers to your container questions is in there. Surely if it can reason, it should be able to generate answers to simple logical puzzles without someone putting most of the pieces together for them first.

        • SuspciousCarrot78@lemmy.world
          11 days ago

          Ok, happy to play ball on that.

          Replying specifically to “carefully worded questions”: clear communication isn’t cheating. You’d mark a student down for misreading an ambiguous question, not for answering a clear one correctly, right?

          Re: worse answers. Tell you what - I’m happy to yeet some unrelated questions at it if you’d like, and let’s see what it does. My setup isn’t bog standard. What’ll likely happen is it’ll say “this question isn’t grounded in the facts given, so I’ll answer from my prior knowledge.” I designed my system to either answer or fail loudly.

          Want to give it a shot? I’ll ground it just to those facts, fair and square. Throw me a question and we’ll see what happens. Deal?

          The context window point is interesting and probably partially true. But working memory interference affects humans too. It’s just what happens to any bounded system under load. Not a gotcha, just a Tuesday AM with 2 cups of coffee.

          The training data argument is the most interesting thing you’ve said, but I think you’re arguing my point for me. You’re acknowledging the model has absorbed the relevant knowledge - you’re just objecting that it needed activating explicitly.

          That’s just priming the pump. You don’t sit an exam without reviewing the material first. Activating relevant knowledge before a task isn’t a workaround for reasoning, it’s a precondition for it.

            • SuspciousCarrot78@lemmy.world
              11 days ago

              Cool. Here you go; once with the physics stuff locked into “scratchpad” #1 and once as a locked document (.md version of the same info) in #2

              #1 “scratchpad” specifically does not allow it to answer with outside knowledge - only pasted info. Thus the hard refusal. There’s a reason I set it up that way (see #3 below).
              #2 “locked” looks for the answer in the source; if not available, it answers from priors.
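              For readers following along, the routing between the two modes can be sketched like this. Function and field names here are hypothetical, not llama-conductor’s actual API; only the control flow is the point:

```python
# Hypothetical sketch of the two grounding modes described above.
# Names are illustrative assumptions, not llama-conductor's real interface.

def route(question: str, source_facts: set, mode: str) -> dict:
    # Stand-in for real retrieval: can the question be answered from source?
    grounded = question in source_facts

    if grounded:
        src = "Scratchpad" if mode == "scratchpad" else "Locked file"
        return {"answer": f"derived from source: {question}",
                "confidence": "low", "source": src}

    if mode == "scratchpad":
        # Hard refusal: outside knowledge is never consulted.
        return {"answer": None, "confidence": "low", "source": "Scratchpad",
                "note": "Not provided in the pasted facts."}

    # "locked" mode: fall back to pre-trained priors, stamped as unverified.
    return {"answer": f"model prior: {question}",
            "confidence": "unverified", "source": "Model"}
```

              The scrambled-eggs outputs below are exactly these two branches: a hard refusal from the scratchpad, and a priors-based answer from locked mode with its provenance stamped.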


              #1 No recipe for scrambled eggs is provided in the scratchpad facts. The rules discuss physical containers, capacity, and overflow but do not include culinary instructions or ingredients.

              Confidence: low | Source: Scratchpad


              #2 [Not found in locked source SUMM_physics.md. Answer based on pre-trained data.]

              Here is a simple scrambled egg recipe:

              Ingredients:
              - 2-3 large eggs
              - 1-2 tablespoons milk or cream (optional)
              - Salt and pepper to taste
              - Butter or oil for cooking

              Instructions:
              1. Crack eggs into a bowl. Add milk/cream, salt, and pepper. Mix gently.
              2. Heat a non-stick pan over medium heat. Add butter or oil.
              3. Once butter foams, pour in the egg mixture.
              4. Stir continuously with a spatula until eggs are soft and fully cooked - about 1-2 minutes.
              5. Remove from heat and serve immediately.

              Source: Model (not in locked file)

              Confidence: unverified | Source: Model


              For context, provenance footers (not vibes, actual computed states):

              https://codeberg.org/BobbyLLM/llama-conductor/src/branch/main/FAQ.md#what-do-confidence-and-source-mean

              #3 I also have a much more sophisticated demo of this, using adversarial questions, theory-of-mind, reversals, etc. When I use >>scratch, I want no LLM vibes or pre-trained data fudging it. Just pure reasoning. If the answer cannot be deduced from context alone, it fails loudly.

              https://codeberg.org/BobbyLLM/llama-conductor/src/branch/main/FAQ.md#deep-example

              All this shit could be done by the big players. They choose not to. Current infra is optimized for keeping people chatting, not for leveraging the tool to do what it ACTUALLY can do.

              • zalgotext@sh.itjust.works
                11 days ago

                Yeah your response sounded like it was generated by an LLM, so I had to check. If you think that’s bad faith on my part, idk what to tell you

                • SuspciousCarrot78@lemmy.world
                  11 days ago

                  I see what the issue is. Basic reasoning and logic seem artificial to you. Telling.

                  Of course it’s bad faith. But not being able to distinguish an LLM from a human in a reasoning debate? That rather undermines the entire “just spicy autocomplete” point.

                  • zalgotext@sh.itjust.works
                    11 days ago

                    You’re not gonna convince me, and I’m not gonna convince you. I’m done with this conversation before you devolve further into personal attacks.