A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • TankovayaDiviziya@lemmy.world · 13 days ago

    We poked fun at this meme, but it goes to show that an LLM is still like a child that needs to be taught to make implicit assumptions and to possess contextual knowledge. Like a child, the current generation of LLMs needs a lot more input and instruction to do specifically what you want it to do.

    • Rob T Firefly@lemmy.world · 13 days ago

      LLMs are not children. Children can have experiences, learn things, know things, and grow. Spicy autocomplete will never actually do any of these things.

        • Rob T Firefly@lemmy.world · 13 days ago

          Our microorganism ancestors also did all those things, and they were far beyond anything an LLM can do. Turning words into numbers, doing a string of math to those numbers, and turning the resulting numbers back into words is not consciousness or wisdom and never will be.
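          To spell out the “words into numbers, math, numbers back into words” part, here’s a deliberately silly toy version (the vocabulary and score table are invented for illustration; a real model does this with billions of learned weights):

```python
# Toy caricature of the words -> numbers -> math -> words pipeline.
# The vocabulary and the score table are made up for illustration.
vocab = ["the", "cat", "sat", "mat"]
word_to_id = {w: i for i, w in enumerate(vocab)}

# A pretend "model": next-word scores for each current word.
scores = [
    [0.0, 2.0, 0.5, 1.0],  # after "the"  -> "cat" scores highest
    [0.1, 0.0, 3.0, 0.2],  # after "cat"  -> "sat" scores highest
    [2.5, 0.0, 0.0, 0.1],  # after "sat"  -> "the" scores highest
    [1.0, 0.5, 0.5, 0.0],  # after "mat"  -> "the" scores highest
]

def next_word(word):
    i = word_to_id[word]    # words into numbers
    row = scores[i]         # a string of math (here: just an argmax)
    j = row.index(max(row))
    return vocab[j]         # numbers back into words

print(next_word("cat"))  # -> sat
```

          That’s the whole loop, and nothing in it knows what a cat is.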

          • TankovayaDiviziya@lemmy.world · 13 days ago

            You think microorganisms can reason? Wow, AI haters are grasping at straws.

            Honestly, I don’t understand Lemmy scoffing at AI and thinking the current iteration is all it will ever be. I’m sure no one thought that automobile technology would go anywhere simply because the first model ran at 3 mph. These things always take time.

            To be clear, I’m not endorsing AI, but I think there is huge potential in the years to come, for better or worse. And it is especially important never to underestimate something, AI haters included, given the destructive potential AI has.

          • plyth@feddit.org · 13 days ago

            Turning a given list of words into numbers, doing a string of math to those numbers, and turning the resulting numbers back into words is not consciousness or wisdom and never will be.

            Neither is moving electrolytes around fat barriers.

            • TankovayaDiviziya@lemmy.world · 13 days ago

              Given how many Lemmy users are older, I think there is simply a natural aversion to the new and a grasping at straws. I never hear younger folks with an IT background dismiss AI as completely as Lemmy does. I’m not a fan of AI, especially how companies shove it at us, but dismissing the idea that it will evolve and improve is a ridiculous position to me.

        • herrvogel@lemmy.world · 13 days ago

          LLMs can’t learn. It’s one of their inherent properties that they are literally incapable of learning. You can train a new model, but you can’t teach new things to an already trained one. All you can do is adjust its behavior a little bit. That creates an extremely expensive cycle where you just have to spend insane amounts of energy to keep training better models over and over and over again. And the wall of diminishing returns on that has already been smashed into. That, and the fact that they simply don’t have concepts like logic and reasoning and knowing, puts a rather hard limit on their potential. It’s gonna take several sizeable breakthroughs to make LLMs noticeably better than they are now.
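          The frozen-weights point can be sketched in a few lines (a one-parameter stand-in for a model, nothing like a real LLM, but the asymmetry is the same):

```python
# Sketch: inference only reads the weights; a training step is the
# only operation that writes them. The single weight stands in for
# the billions of parameters frozen at the end of training.
weight = 2.0  # fixed once training ends

def generate(x):
    # Read-only: answering questions never updates anything.
    return weight * x

def training_step(x, target, lr=0.1):
    # Only this gradient step moves the weight.
    global weight
    error = generate(x) - target
    weight -= lr * error * x

before = weight
generate(3.0)             # use the "model"...
assert weight == before   # ...it learned nothing from being used

training_step(3.0, 9.0)   # retraining is what changes it
assert weight != before
```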

          There might be another kind of AI that solves those problems inherent to LLMs, but at present that is pure sci-fi.

      • enumerator4829@sh.itjust.works · 13 days ago

        I started experimenting with the spice the past week. Went ahead and tried to vibe code a small toy project in C++. It’s weird. I’ve got some experience teaching programming, this is exactly like teaching beginners - except that the syntax is almost flawless and it writes fast. The reasoning and design capabilities on the other hand - ”like a child” is actually an apt description.

        I don’t really know what to think yet. The ability to automate refactoring across a project in a more ”free” way than an IDE is kinda nice. While I enjoy programming, data structures and algorithms, I kinda get bored at the ”write code”-part, so really spicy autocomplete is getting me far more progress than usual for my hobby projects so far.

        On the other hand, holy spaghetti monster, the code you get if you let it run free. All the people prompting based on what feature they want the thing to add will create absolutely horrible piles of garbage. On the other hand, if I prompt with a decent specification of the code I want, I get code somewhat close to what I want, and given an iteration or two I’m usually fairly happy. I think I can get used to the spicy autocomplete.

    • kshade@lemmy.world · 13 days ago

      We have already thrown just about all of the Internet and then some at them. It shows that LLMs cannot think or reason. Which isn’t surprising; they weren’t meant to.

      • eronth@lemmy.world · 13 days ago

        Or at least they can’t reason the way we do about our physical world.

        • zalgotext@sh.itjust.works · 13 days ago

          No, they cannot reason, by any definition of the word. LLMs are statistics-based autocomplete tools. They don’t understand what they generate, they’re just really good at guessing how words should be strung together based on complicated statistics.
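          For the record, “statistics-based autocomplete” is not a metaphor; the simplest possible language model is literally a table of word-pair counts (toy corpus invented here; real LLMs are incomparably bigger, but the predict-the-next-token framing is the same):

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which: statistics, not understanding.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def autocomplete(word):
    # Emit the statistically most common next word.
    return follows[word].most_common(1)[0][0]

print(autocomplete("the"))  # -> cat ("cat" followed "the" twice, "mat" once)
```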

          • SuspciousCarrot78@lemmy.world · 12 days ago

            You seem pretty sure of that. Is your position firm or are you willing to consider contrary evidence?

            Definition: https://www.wordnik.com/words/reasoning

            • Evidence or arguments used in thinking or argumentation.

            • The deduction of inferences or interpretations from premises; abstract thought; ratiocination.

            Evidence: https://lemmy.world/post/43503268/22326378

            I believe this clearly shows the LLM can perform something functionally equivalent to deductive reasoning when given clear premises.

            “Auto-complete” is lazy framing. A calculator is “just” voltage differentials on silicon. That description is true and also tells you nothing useful about whether it’s doing arithmetic.

            The question of whether something is or isn’t reasoning isn’t answered by describing what it runs on; it’s answered by looking at whether it exhibits the structural properties of reasoning: consistency across novel inputs, correct application of inference rules, sensitivity to logical relationships between premises. I think the above example shows something in that direction. YMMV.

            • zalgotext@sh.itjust.works · 12 days ago

              I can be convinced by contrary evidence if provided. There is no evidence of reasoning in the example you linked. All that proved was that if you prime an LLM with sufficient context, it’s better at generating output, which is honestly just more support for calling them statistical auto-complete tools. Try asking it those same questions without feeding it your rules first, and I bet it doesn’t generate the right answers. Try asking it those questions 100 times after feeding it the rules, I bet it’ll generate the wrong answers a few times.

              If LLMs are truly capable of reasoning, it shouldn’t need your 16 very specific rules on “arithmetic with extra steps” to get your very carefully worded questions correct. Your questions shouldn’t need to be carefully worded. They shouldn’t get tripped up by trivial “trick questions” like the original one in the post, or any of the dozens of other questions like it that LLMs have proven incapable of answering on their own. The fact that all of those things do happen supports my claim that they do not reason, or think, or understand - they simply generate output based on their input and internal statistical calculations.

              LLMs are like the Wizard of Oz. From afar, they look like these powerful, all-knowing things. The speak confidently and convincingly, and are sometimes even correct! But once you get up close and peek behind the curtain, you realize that it’s just some complicated math, clever programming, and a bunch of pirated books back there.

              • SuspciousCarrot78@lemmy.world · 12 days ago

                Ok, I’ll take that in good faith and respond in kind.

                “It needed the rules, therefore it’s not reasoning” is your weakest point, and it’s load-bearing for your whole argument.

                Every reasoning system needs premises - you, me, an 8yr old. You cannot deduce conclusions from nothing. Demanding that a reasoner perform without premises isn’t a test of reasoning, it’s a demand for magic. Premise-dependence isn’t a bug, it’s the definition.

                Look at what the rules actually were: https://pastes.io/rules-a-ph

                No numbers, containers, or scenarios. Just abstract invariants about bounded systems — the kind of thing you’d write in formal logic. Most aren’t even physics facts, they’re logical constraints: stated quantity vs physically present quantity, overflow vs contained volume. Premises in the strict logical sense.

                When the LLM correctly handles novel chained problems, including the 4oz cup already holding 3oz, tracking state across two operations, that’s deriving conclusions from general premises applied to novel instances. That’s what deductive reasoning is, per the definition I cited.
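                To make that concrete, the “overflow vs contained volume” invariant can itself be written down mechanically and applied to the 4oz-cup case (my own paraphrase of one rule, not the actual pastebin wording):

```python
# One premise, encoded: a bounded container holds at most its
# capacity; anything poured beyond that overflows.
def pour(capacity, current, added):
    total = current + added
    contained = min(total, capacity)
    overflow = max(total - capacity, 0)
    return contained, overflow

# Novel instance: a 4oz cup already holding 3oz, two operations.
state, spilled = pour(capacity=4, current=3, added=2)
print(state, spilled)   # 4 contained, 1 overflows
state, spilled = pour(capacity=4, current=state, added=3)
print(state, spilled)   # still 4 contained, 3 more overflow
```

                Applying the general premise to novel instances is the structure at issue; the dispute is whether the LLM does something functionally equivalent.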

                “Without the rules it fails” - without context, humans make the same errors. Default assumptions under uncertainty aren’t a failure of reasoning, they’re a feature of any mind with incomplete information.

                “It’ll fail sometimes across 100 runs” - so do humans under load. Probabilistic performance doesn’t disqualify a process from being reasoning. It just makes it imperfect reasoning, which is the only kind that exists.

                The Wizard of Oz analogy is vivid but does no logical work. “Complicated math and clever programming” describes implementation, not function. Your neurons are electrochemical signals on evolved heuristics. If that rules out reasoning, it rules out all reasoning everywhere. If it doesn’t rule out yours, you need a principled account of why it rules out the LLM’s.

                • zalgotext@sh.itjust.works · 11 days ago

                  It needed the rules, and it needed carefully worded questions that matched the parameters set by the rules. I bet if the questions’ wording didn’t match your rules so exactly, it would generate worse answers. Heck, I bet if you gave it the rules, then asked several completely unrelated questions, then asked it your carefully worded rules-based questions, it would perform worse, because its context window would be muddied. Because that’s what it’s generating responses from: the contents of its context window, coupled with stats-based word generation.

                  I still maintain that it shouldn’t need the rules if it’s truly reasoning though. LLMs train on a massive set of data, surely the information required to reason out the answers to your container questions is in there. Surely if it can reason, it should be able to generate answers to simple logical puzzles without someone putting most of the pieces together for them first.

                  • SuspciousCarrot78@lemmy.world · 11 days ago

                    Ok, happy to play ball on that.

                    Replying point by point: “carefully worded questions”; clear communication isn’t cheating. You’d mark a student down for misreading an ambiguous question, not for answering a clear one correctly, right?

                    Re: worse answers. Tell you what: I’m happy to yeet some unrelated questions at it if you’d like and let’s see what it does. My setup isn’t bog standard - what’ll likely happen is it’ll say “this question isn’t grounded in the facts given, so I’ll answer from my prior knowledge.” I designed my system to either answer or fail loudly.

                    Want to give it a shot? I’ll ground it just to those facts, fair and square. Throw me a question and we’ll see what happens. Deal?

                    The context window point is interesting and probably partially true. But working memory interference affects humans too. It’s just what happens to any bounded system under load. Not a gotcha, just a Tuesday AM with 2 cups of coffee.

                    The training data argument is the most interesting thing you’ve said, but I think you’re arguing my point for me. You’re acknowledging the model has absorbed the relevant knowledge - you’re just objecting that it needed activating explicitly.

                    That’s just priming the pump. You don’t sit an exam without reviewing the material first. Activating relevant knowledge before a task isn’t a workaround for reasoning, it’s a precondition for it.

        • Nalivai@lemmy.world · 13 days ago

          You’re falling into the same trap. When the letters on the screen tell you something, it’s not necessarily the truth. When “I’m reasoning” is written in a chatbot window, it doesn’t mean there is something there that’s reasoning.

      • Nalivai@lemmy.world · 13 days ago

        By now it’s getting clear that fundamentally this is the best version of the thing we’re going to get. This is its prime time.
        For some time there was a legitimate question of “if we give it enough data, will there be a qualitative jump?”, and as far as we can see right now, we’re way past that jump. A predictive algorithm can form grammatically correct sentences that are related to the context. That’s it, that’s the jump.
        Now a bunch of salespeople are trying to convince us that if there was one jump, there will necessarily be others, while there is no real indication of that.