Half of LLM users (49%) think the models they use are smarter than they are, including 26% who think their LLMs are “a lot smarter.” Another 18% think LLMs are as smart as they are. Here are some of the other attributes they see:

  • Confident: 57% say the main LLM they use seems to act in a confident way.
  • Reasoning: 39% say the main LLM they use shows the capacity to think and reason at least some of the time.
  • Sense of humor: 32% say their main LLM seems to have a sense of humor.
  • Morals: 25% say their main model acts like it makes moral judgments about right and wrong at least sometimes.
  • Sarcasm: 17% say their main LLM seems to respond sarcastically.
  • Sad: 11% say the main model they use seems to express sadness, while 24% say that model also expresses hope.
  • DeusUmbra@lemmy.world · 11 hours ago

    Remember that 54% of adults in America cannot read beyond a 6th grade level, with 21% being fully illiterate.

  • Echo Dot@feddit.uk · 2 days ago

    Maybe if the adults actually didn’t use the LLMs so much this wouldn’t be the case.

  • Th4tGuyII@fedia.io · 2 days ago

    LLMs are made to mimic how we speak, and some can even pass the Turing test, so I’m not surprised that people who don’t know better think of these LLMs as conscious in some way or another.

    It’s not necessarily a fault of those people; it’s a fault of how LLMs are purposefully misadvertised to the masses.

  • 1984@lemmy.today · 2 days ago

    An LLM simply has memorized facts. If that is smart, then sure, no human can compete.

    Now ask an LLM to build a house. Oh shit, no legs, can’t walk. A human can walk without even thinking about it.

    In the future, though, there will be robots that can build houses using AI models to learn from. But not for a long time.

    • Omgpwnies@lemmy.world · 2 days ago

      3D-printed concrete houses are already a thing; there’s no need for human-like machines to build stuff. They can be purpose-built to perform whatever portion of the house-building task they need to do. There’s absolutely no barrier today to having a hive of machines built for specific purposes build houses, besides the fact that no one has yet stitched the necessary components together.

      It’s not at all out of the question that an AI could be trained on a dataset of engineering diagrams, house layouts, materials, and construction methods, with subordinate AIs trained on specific aspects of housing systems like insulation, roofing, plumbing, framing, and electrical, which are then used to drive the actual machines building the house. The principal human requirement at that point would be engineers to check the math and sign off on a design for safety purposes.

      • WagyuSneakers@lemm.ee · 2 days ago

        If you trained it on all of that it wouldn’t be a good builder. Actual builders would tell you it’s bad and you would ignore them.

        LLMs do not give you accurate results. They can simply string words together into coherent sentences, and that’s the extent of their capacity. They just agree with whatever the prompter is pushing, and that makes simple people think they’re smart.

        AI will not be building you a house unless you count a 3D-printed house, and we both know that’s overly pedantic. If that counted, a music box from 1780 would be an AI.

    • samus12345@lemm.ee · edited 2 hours ago

      No. People think things that aren’t smarter than them are smarter all the time.

  • LovableSidekick@lemmy.world · 21 hours ago

    I’m surprised it’s not way more than half. Almost every subjective thing I read about LLMs oversimplifies how they work and hugely overstates their capabilities.

  • forrcaho@lemmy.world · 1 hour ago

    As far as I can tell from the article, the definition of “smarter” was left to the respondents, and “answers as if it knows many things that I don’t know” is certainly a reasonable definition – even if you understand that, technically speaking, an LLM doesn’t know anything.

    As an example, I used ChatGPT just now to help me compose this post, and the answer it gave me seemed pretty “smart”:

    Me: what’s a good word to describe the people in a poll who answer the questions? I didn’t want to use “subjects” because that could get confused with the topics covered in the poll.

    ChatGPT: “Respondents” is a good choice. It clearly refers to the people answering the questions without ambiguity.

    The poll is interesting for the other stats it provides, but all the snark about these people being dumber than LLMs is just silly.