When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. Bernklau had served for years as a courts reporter, and the AI chatbot falsely blamed him for the very crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

  • n0m4n@lemmy.world

    If this were some fiction plot, Copilot reasoned the plot twist, and ran with it. Instead of the butler, the writer did it. To the computer, these are about the same.

  • gcheliotis@lemmy.world

    The AI did not “decide” anything. It has no will. And no understanding of the consequences of any particular “decision”. But I guess “probabilistic model produces erroneous output” wouldn’t get as many views. The same point could still be made about not placing too much trust on the output of such models. Let’s stop supporting this weird anthropomorphizing of LLMs. In fact we should probably become much more discerning in using the term “AI”, because it alludes to a general intelligence akin to human intelligence with all the paraphernalia of humanity: consciousness, will, emotions, morality, sociality, duplicity, etc.

    • Hello Hotel@lemmy.world

      the AI “decided” in the same way the dice “decided” to land on 6 and 4 and screw me over. The system made a result using logic and entropy. With AI, some people are just using this informal way of speaking (subconsciously anthropomorphising), while others look at it and genuinely believe, or want to pretend, that it’s alive. You can never really know without asking them directly.

      Yes, if the intent is confusion, it is pretty manipulative.

      • gcheliotis@lemmy.world

        Granted, our tendency towards anthropomorphism is near ubiquitous. But it would be disingenuous to claim that it does not play out in very specific and very important ways in how we speak and think about LLMs, given that they are capable of producing very convincing imitations of human behavior. And as such also produce a very convincing impression of agency. As if they actually do decide things. Very much unlike dice.

        • Hello Hotel@lemmy.world

          A doll is also designed to be anthropomorphised, to have life projected onto it. Unlike dolls, when someone talks about LLMs as alive, most people have no clue if they are pretending or not. (And marketers take advantage of it!) We are fed a culture that accidentally says “chatGPT + Boston Dynamics robot = Robocop”, assuming the only fictional part is that we don’t yet have the ability to make it, not that the thing we create wouldn’t be human (or even need to be human).

    • stingpie@lemmy.world

      No, you’re thinking of the first scene of the movie where a fly falls into the teletype machine and causes it to type ‘tuttle’ instead of ‘buttle’.

      • Blackmist@feddit.uk

        It’s not my fault that Buttle’s heart condition didn’t appear on Tuttle’s file!

  • sunzu2@thebrainbin.org

    These are not hallucinations, whatever that is supposed to mean lol

    The tool is working as intended and getting wrong answers due to how it works. His name frequently had these words around it online, so the AI told the story it was trained on. It doesn’t understand context. I am sure you can also ask it clarifying questions and it will admit it is wrong and correct itself…

    AI🤡

        • chiisana@lemmy.chiisana.net

          The models are not wrong. The models are nothing but statistical machines that are really good at predicting the next word likely to follow, based on the prior information given. They have no understanding of the context of the words, just that statistically they’re likely to follow. As such, all LLM outputs are correct to their design.

          The users’ assumption/expectation of the output being factual is what is wrong. Hallucination is a fancy word in an attempt to make the users not feel as upset when the output passage doesn’t match their assumption/expectation.

          • snooggums@lemmy.world

            The users’ assumption/expectation of the output being factual is what is wrong.

            So randomly spewing out bullshit is the actual design goal of AI models? Why does it exist at all?

            • ApexHunter@lemmy.ml

              They’re supposed to be good at transformation tasks. Language translation, create x in the style of y, replicate a pattern, etc. LLMs are outstandingly good at language transformer tasks.

              Using an llm as a fact generating chatbot is actually a misuse. But they were trained on such a large dataset and have such a large number of parameters (175 billion!?) that they passably perform in that role… which is, at its core, to fill in a call+response pattern in a conversation.

              At a fundamental level it will never ever generate factually correct answers 100% of the time. That it generates correct answers > 50% of the time is actually quite a marvel.

              • snooggums@lemmy.world

                They’re supposed to be good at transformation tasks. Language translation, create x in the style of y, replicate a pattern, etc. LLMs are outstandingly good at language transformer tasks.

                That it generates correct answers > 50% of the time is actually quite a marvel.

                So good as a translator as long as accuracy doesn’t matter?

              • chiisana@lemmy.chiisana.net

                If memory serves, 175B parameters is for the GPT3 model, not even the 3.5 model that caught the world by surprise; and they have not disclosed parameter space for GPT4, 4o, and o1 yet. If memory also serves, 3 was primarily English, and had only a relatively small set of words (I think 50K or something to that effect) it was considering as next token candidates. Now that it is able to work in multiple languages and multi modal, the parameter space must be much much larger.

                The amount of things it can do now is incredible, but our perceived incremental improvements on LLM will probably slow down (due to the pace fitting to the predicted lines in log space)… until the next big thing (neural nets > expert systems > deep learning > LLM > ???). Such an exciting time we’re in!

                Edit: found it. Roughly 50K tokens for input output embedding, in GPT3. 3Blue1Brown has a really good explanation here for anyone interested: https://youtu.be/wjZofJX0v4M
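
                For a rough sense of how that ~50K-token vocabulary relates to the parameter count, here’s a back-of-the-envelope calculation (the 50,257 vocab size and 12,288 hidden size are the commonly cited GPT-3 figures, quoted from memory, not something from this thread):

                  # Rough arithmetic, assuming the commonly cited GPT-3 numbers.
                  vocab_size = 50_257          # BPE tokens the model chooses between
                  d_model = 12_288             # width of each token embedding vector

                  embedding_params = vocab_size * d_model
                  print(f"{embedding_params:,}")                   # ~617 million

                  total_params = 175_000_000_000
                  print(f"{embedding_params / total_params:.1%}")  # ~0.4% of the total

                So the vocabulary itself is only a small slice of the parameter space; most of the 175B sits in the transformer layers.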

      • mindlesscrollyparrot@discuss.tchncs.de

        Sure, but which of these factors do you think were relevant to the case in the article? The AI seems to have had a large corpus of documents relating to the reporter. Those articles presumably stated clearly that he was the reporter and not the defendant. We are left with “incorrect assumptions made by the model”. What kind of assumption would that be?

        In fact, all of the results are hallucinations. It’s just that some of them happen to be good answers and others are not. Instead of labelling the bad answers as hallucinations, we should be labelling the good ones as confirmation bias.

        • femtech@midwest.social

          It was an incorrect assumption based on his name being in the article. It should have listed him as the author only, not a part of the cases.

      • EpeeGnome@lemm.ee

        Yes, hallucination is the now-standard term for this, but it’s a complete misnomer. A hallucination is when something that does not actually exist is perceived as if it were real. LLMs do not perceive, and therefore can’t hallucinate. I know, the word is stuck now and fighting against it is like trying to bail out the tide, but it really annoys me and I refuse to use it. The phenomenon would better be described as a confabulation.

  • tiramichu@lemm.ee

    The worrying truth is that we are all going to be subject to these sorts of false correlations and biases and there will be very little we can do about it.

    You go to buy car insurance, and find that your premium has gone up 200% for no reason. Why? Because the AI said so. Maybe someone with your name was in a crash. Maybe you parked overnight at the same GPS location where an accident happened. Who knows what data actually underlies that decision or how it was made, but it was. And even the insurance company itself doesn’t know how it ended up that way.

    • catloaf@lemm.ee

      We’re already there, no AI needed. Rates are all generated by computer. Ask your agent why your rate went up and they’ll say “idk computer said so”.

      • futatorius@lemm.ee

        Someone, somewhere along the line, almost certainly coded rate(2025) = 2*rate(2024). And someone approved that going into production.

  • Broken@lemmy.ml

    This sounds like a great movie.

    AI sends police after him because of things he wrote. Writer is on the run, trying to clear his name the entire time. Somehow gets to broadcast the source of the articles to the world to clear his name. Plot twist ending is that he was indeed the perpetrator behind all the crimes.

  • erenkoylu@lemmy.ml

    The problem is not the AI. The problem is the huge number of morons who deploy AI without proper verification and control.

    • futatorius@lemm.ee

      Yeah, just like the thousands or millions of failed IT projects. AI is just a new weapon you can use to shoot yourself in the foot.

    • Cethin@lemmy.zip

      Sure, and also people using it without knowing that it’s glorified text completion. It finds patterns, and that’s mostly it. If your task involves pattern recognition then it’s a great tool. If it requires novel thought, intelligence, or the synthesis of information, then you probably need something else.

    • Zeek@lemmy.world

      Not really. The purpose of the transformer architecture was to get around this limitation through the use of attention heads. Copilot or any other modern LLM has this capability.
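
      For reference, an attention head boils down to something like this single-head sketch in numpy (a real model runs many heads in parallel over learned weights and concatenates the results; the sizes and values here are random stand-ins):

        import numpy as np

        def softmax(x, axis=-1):
            e = np.exp(x - x.max(axis=axis, keepdims=True))
            return e / e.sum(axis=axis, keepdims=True)

        def attention_head(x, Wq, Wk, Wv):
            # x: (seq_len, d_model); W*: (d_model, d_head)
            q, k, v = x @ Wq, x @ Wk, x @ Wv
            scores = q @ k.T / np.sqrt(k.shape[-1])  # how strongly each token attends to every other
            return softmax(scores) @ v               # context-weighted mix of the value vectors

        rng = np.random.default_rng(0)
        seq_len, d_model, d_head = 5, 16, 8
        x = rng.normal(size=(seq_len, d_model))      # stand-in for token embeddings
        Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
        print(attention_head(x, Wq, Wk, Wv).shape)   # (5, 8): every output row mixes in context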

      • vrighter@discuss.tchncs.de

        The llm does not give you the next token. It gives you a probability distribution of what the next token could be. Then, after the llm, that probability distribution is randomly sampled.

        You could add billions of attention heads and it would still have an element of randomness at the end. Copilot and every other llm (past, present or future) have this problem too. They all “hallucinate” (have a random element in choosing the next token).
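
        To make that last step concrete, it looks roughly like this (a toy sketch with made-up numbers, not any vendor’s actual pipeline):

          import numpy as np

          rng = np.random.default_rng()

          # Made-up distribution over candidate next tokens, as an llm might output.
          tokens = ["reporter", "defendant", "witness", "judge"]
          probs  = [0.55, 0.25, 0.15, 0.05]

          # The model's job ends at `probs`; picking one token is a separate, random step.
          print(rng.choice(tokens, p=probs))

        Run it a few times and the less likely options still get picked now and then; that randomness is the part that never goes away.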

        • Terrasque@infosec.pub

          randomly sampled.

          Semi-randomly. There are a lot of sampling strategies: for example temperature, top-K, top-p, min-p, mirostat, repetition penalty, greedy…
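
          As a rough sketch of how a couple of those reshape the distribution before sampling (toy logits, illustrative only):

            import numpy as np

            def sample(logits, temperature=1.0, top_k=None):
                logits = np.asarray(logits, dtype=float)
                if top_k is not None:                 # top-K: drop everything but the K most likely
                    cutoff = np.sort(logits)[-top_k]
                    logits = np.where(logits >= cutoff, logits, -np.inf)
                logits = logits / temperature         # low temperature sharpens, high flattens
                probs = np.exp(logits - logits.max())
                probs /= probs.sum()
                return np.random.default_rng().choice(len(probs), p=probs)

            print(sample([2.0, 1.5, 0.3, -1.0], temperature=0.7, top_k=2))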

          • futatorius@lemm.ee

            Semi-randomly

            A more correct term is constrained randomness. You’re still looking at probability distribution functions, but they’re more complex than just a throw of the dice.

          • vrighter@discuss.tchncs.de

            randomly doesn’t mean equiprobable. If you’re sampling a probability distribution, it’s random. Temperature 0 is never used, otherwise a lot of stuff would consistently hallucinate the exact same thing

            • Terrasque@infosec.pub

              Temperature 0 is never used

              It is in some cases, where you want a deterministic / “best” response. Seen it used in benchmarks, or when doing some “Is this comment X?” where X is positive, negative, spam, and so on. You don’t want the model to get creative there, but rather answer consistently and always the most likely path.
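
              For comparison with the sampling sketches upthread, the deterministic version just takes the single most likely option (toy labels and scores, made up for illustration):

                import numpy as np

                labels = ["positive", "negative", "spam"]
                logits = [0.2, 1.7, -0.4]               # made-up scores for an "Is this comment X?" check

                # "Temperature 0" / greedy: always pick the argmax, no randomness involved.
                print(labels[int(np.argmax(logits))])   # always "negative" for these scores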

    • Rivalarrival@lemmy.today

      It’s a solvable problem. AI is currently at a stage of development equivalent to a 2-year-old, just with better grammar. Everything it is doing now is mimicry and babbling.

      It needs to feed its own interactions right back into its training data, to become a better and better mimic. Eventually, the mechanism it uses to select the appropriate data to form a response will become more and more sophisticated, and it will hallucinate less and less. Eventually, its hallucinations will be seen as “insightful” rather than wild-ass guesses.

      • linearchaos@lemmy.world

        Good luck being pro AI here. Regardless of the fact that they could just put a note in the prompt that says the writer of this document was not responsible for the act, they are just writing about it, and it would not frame them as the perpetrator.

        • Hacksaw@lemmy.ca

          If you already know the answer you can tell the AI the answer as part of the question and it’ll give you the right answer.

          That’s what you sound like.

          AI people are as annoying as the Musk crowd.

          • linearchaos@lemmy.world

            How helpful of you to tell me what I’m saying, especially when you reframe my argument to support yourself.

            That’s not what I said. Why would you even think that’s what I said.

            Before you start telling me what I sound like, you should probably try to stop sounding like an impetuous child.

            Every other post from you is dude or LMAO. How do you expect anyone to take anything you post seriously?

          • linearchaos@lemmy.world

            You know what, don’t bother responding back to me I’m just blocking you now, before you decide to drag some more of that tired right wing bullshit that you used to fight with everyone else with, none of your arguments on here are worth anyone even reading so I’m not going to waste my time and responding to anything or reading anything from you ever again.

          • futatorius@lemm.ee

            I’m no AI fanboy, but what you just described was the feedback cycle during training.

        • vrighter@discuss.tchncs.de

          the problem isn’t being pro ai. It’s people pulling supposed ai capabilities out of their asses without having actually looked at a single line of code. This is obvious to anyone who has coded a neural network. Yes, even to openai themselves, but if they let you believe that, then the money stops flowing. You simply can’t get an 8-ball to give the correct answer consistently. Because it’s fundamentally random.

      • vrighter@discuss.tchncs.de

        also, what you described has already been studied. Training an llm on its own output completely destroys it; it doesn’t make it better.

        • linearchaos@lemmy.world

          This is incorrect or perhaps updated. Generating new data, using a different AI method to tag that data, and then training on that data is definitely a thing.

          • vrighter@discuss.tchncs.de

            yes it is, and it doesn’t work.

            edit: to expand, if you’re generating data it’s an estimation. The network will learn the same biases and make the same mistakes and assumptions you did when generating the data. Also, outliers won’t be in the set (because you didn’t know about them, so the network never sees any).

              • vrighter@discuss.tchncs.de

                from their own site:

                Alpaca also exhibits several common deficiencies of language models, including hallucination, toxicity, and stereotypes. Hallucination in particular seems to be a common failure mode for Alpaca, even compared to text-davinci-003.

            • Terrasque@infosec.pub

              Microsoft’s Dolphin and phi models have used this successfully, and there’s some evidence that all newer models use big LLM’s to produce synthetic data (Like when asked, answering it’s ChatGPT or Claude, hinting that at least some of the dataset comes from those models).

            • Rivalarrival@lemmy.today

              It needs to be retrained on the responses it receives from its conversation partner. Its previous output provides context for its partner’s responses.

              It recognizes when it is told that it is wrong. It is fed data showing that certain outputs often invite “you’re wrong” feedback from its partners, and it is instructed to minimize such feedback.

              It is not (yet) developing true intelligence. It is simply learning to bias its responses in such a way that its audience doesn’t immediately call it a liar.

              • vrighter@discuss.tchncs.de

                Yeah that implies that the other network(s) can tell right from wrong. Which they can’t. Because if they did the problem wouldn’t need solving.

                • Rivalarrival@lemmy.today

                  What other networks?

                  It currently recognizes when it is told it is wrong: it is told to apologize to its conversation partner and to provide a different response. It doesn’t need another network to tell it right from wrong. It needs access to the previous sessions where humans gave it that information.

      • vrighter@discuss.tchncs.de

        The outputs of the nn are sampled using a random process. Probability distribution is decided by the llm, loaded die comes after the llm. No, it’s not solvable. Not with LLMs. not now, not ever.

    • wintermute@discuss.tchncs.de

      Exactly. LLMs don’t understand semantically what the data means, it’s just how often some words appear close to others.

      Of course this is oversimplified, but that’s the main idea.

      • vrighter@discuss.tchncs.de

        no need for that subjective stuff. The objective explanation is very simple. The output of the llm is sampled using a random process. A loaded die with probabilities according to the llm’s output. It’s as simple as that. There is literally a random element that is both not part of the llm itself, yet required for its output to be of any use whatsoever.

  • Ganbat@lemmy.dbzer0.com

    Oh, this would be funny if people en masse were smart enough to understand the problems with generative ai. But, because there are people out there like that one dude threatening to sue Mutahar (quoted as saying “ChatGPT understands the law”), this has to be a problem.

    • finitebanjo@lemmy.world

      And to help educate the ignorant masses:

      Generative AI and LLMs start by predicting the next word in a sequence. The words are generated independently of each other and when optimized: simultaneously.

      The reason that it used the reporter’s name as the culprit is because out of the names in the sample data his name appeared at or near the top of the list of frequent names so it was statistically likely to be the next name mentioned.

      AI have no concepts, period. It doesn’t know what a person is, or what the laws are. It generates word salad that approximates human statements. It is a math problem, statistics.
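
      To make the statistics point concrete, here is a toy version of that frequency effect (the sentences are invented stand-ins for years of court articles, not real quotes):

        from collections import Counter
        import re

        # Invented corpus standing in for court reports that all carry the same byline.
        articles = [
            "Defendant Mueller was convicted of fraud, reports Martin Bernklau.",
            "Schmidt was charged with abuse, reports Martin Bernklau.",
            "Weber was sentenced for theft, reports Martin Bernklau.",
        ]

        # Count which capitalized names co-occur with crime-related words.
        crime_words = {"convicted", "charged", "sentenced"}
        counts = Counter()
        for text in articles:
            words = re.findall(r"[A-Za-z]+", text)
            if crime_words & {w.lower() for w in words}:
                counts.update(w for w in words if w[0].isupper() and w.lower() not in crime_words)

        print(counts.most_common(3))
        # [('Martin', 3), ('Bernklau', 3), ('Defendant', 1)] -- the byline shows up next to
        # every crime word, each individual defendant only once, so a purely statistical
        # continuation favors the reporter's name.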

      There are actual science fiction stories built on the premise that AI reporting on the start of Nuclear War resulted in actual kickoff of the apocalypse, and we’re at that corner now.

      • Ganbat@lemmy.dbzer0.com

        There are actual science fiction stories built on the premise that AI reporting on the start of Nuclear War resulted in actual kickoff of the apocalypse, and we’re at that corner now.

        IIRC, this was the running theory in Fallout until the show.

        Edit: I may be misremembering, it may have just been something similar.

        • finitebanjo@lemmy.world

          I haven’t played the original series but in 3 and 4 it was pretty much confirmed the big companies like BlamCo! intentionally set things in motion, but also that Chinese nuclear vessels were already in place near America.

          Ironically, Vault-Tec wasn’t planning to ever actually use their vaults for anything except human experimentation, so they might have been out of the loop.

          • Ganbat@lemmy.dbzer0.com

            Yeah, it’s kinda been all over the place, but that’s where the show ended up going, except Vault Tech was very much in the loop. I can’t get spoiler tags to work, so I’ll leave out the details.

            What I’m thinking of, though, was also in Fallout 4. I’ve been thinking on it, and I remember now that what I’m thinking of is that it’s implied that the AI from the Railroad quests fed fake info about incoming missiles to force America to fire. I still don’t remember any specifics, though, and I could be misremembering. It’s been a good few years after all, lol.

      • Echo Dot@feddit.uk

        That’s not quite true. AIs are not just analyzing the possible next word; they are using complex mathematical operations to calculate it. It’s not just the next word that’s most possible, it’s the next one that’s most likely given the input.

        The trouble is that the AIs are only as smart as their algorithms, and Google’s AI seems to be really goddamn stupid.

        Point is, they’re not all made equal. Some of them are actually quite impressive, although you are correct that none of them are actually intelligent.

        • finitebanjo@lemmy.world

          nOt JUsT anAlYzInG thE NeXT wOrD

          Poor use of terms. AI does not analyze. It does not think, or decode, or even parse things. It gets fed sample data, and when given a prompt (half a form) it uses a statistical algorithm to finish the other half.

          All of the algorithms are stupid, they will all hallucinate and say the wrong things. You can add more corrective layers like OpenAI has but you’ll only be closer to the sample data. 95% accurate. 98%. 99%. It doesn’t matter, it’s always stuck just below average human competency for questions already asked countless times, and completely worthless for anything that requires actual independent thought.

      • WldFyre@lemm.ee
        3 months ago

        Generative AI and LLMs start by predicting the next word in a sequence. The words are generated independently of each other

        Is this true? I know that’s how Markov chains work, but I thought neural nets worked differently with larger tokens.
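
        For contrast, here is roughly what a word-level Markov chain does: the next word depends only on the current word, whereas a transformer conditions on the whole preceding context (toy sketch, invented corpus):

          import random
          from collections import defaultdict

          text = "the court heard the case and the court adjourned"
          words = text.split()

          # Bigram table: each word maps to the words that have been seen following it.
          table = defaultdict(list)
          for a, b in zip(words, words[1:]):
              table[a].append(b)

          # Generate: each step looks at the current word only, nothing earlier.
          word, out = "the", ["the"]
          for _ in range(6):
              word = random.choice(table[word])
              out.append(word)
              if word not in table:   # no observed continuation, stop
                  break
          print(" ".join(out))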

        • finitebanjo@lemmy.world

          The only difference between a generic old fashioned word salad generator and GPT4 is the scale. You put multiple layers correcting for different factors on it and suddenly your Language Model turns into a Large Language Model.

          So basically your large tokens are made up of smaller tokens, but it’s still just a statistical approximation of the sample data, with little to no emergent behavior or even memory of what it’s saying as it says it.

          It also exponentially increases power requirements, as the world is figuring out.

          • WldFyre@lemm.ee

            I don’t disagree, I was just pointing out that “each word is generated independently of each other” isn’t strictly accurate for LLM’s.

            It’s part of the reason they are so convincing to some people, they are able to hold threads semi-coherently throughout entire essay length paragraphs without obvious internal lapses of logic.

            • finitebanjo@lemmy.world

              I think you’re seeing coherence where there is none.

              Ask it to solve the riddle about the fox the chicken and the grains.

              Even if it does solve the riddle without blurting out random nonsense, that’s just because the sample data solved the riddle billions of times before.

              It’s just guessing words.

              • WldFyre@lemm.ee

                I think you’re seeing coherence where there is none.

                Ask it to solve the riddle about the fox the chicken and the grains.

                I think it getting tripped up on riddles that people often fail or it not getting factual things correct isn’t as important for “believability”, which is probably a word closer to what I meant than “coherence.”

                No one was worried about misinformation coming from r/SubredditSimulator, for example, because Markov chains have much, much less believability. “Just guessing words” is a bit of an over-simplification for neural nets, which are a powerful technology even if the utility of turning it towards language is debatable.

                And if LLM’s weren’t so believable we wouldn’t be having so many discussions about the misinformation or misuse they could cause. I don’t think we’re disagreeing I’m just trying to add more detail to your “each word is generated independently” quote, which is patently wrong and detracts from your overall point.

                • finitebanjo@lemmy.world

                  lmao yeh bro such a hard riddle totally

                  I concede. AI has a superintelligient brain and I’m just so jealous. You have permission to whip me into submission.

      • NιƙƙιDιɱҽʂ@lemmy.world

        AI have no concepts, period. It doesn’t know what a person is, or what the laws are. It generates word salad that approximates human statements.

        This isn’t quite accurate. LLMs semantically group words and have a sort of internal model of concepts and how different words relate to them. It’s still not that of a human and certainly does not “understand” what it’s saying.

        I get that everyone’s on the “shit on AI train”, and it’s rightfully deserved in many ways, but you’re grossly oversimplifying. That said, way too many people do give LLMs too much credit and think it’s effectively magic. Reality, as is usually the case, is somewhere in the middle.

        • finitebanjo@lemmy.world

          Jfc, you dudes really piss me off with these contrarian rants. Piss off, it takes power and makes sophisticated word salads.

          • NιƙƙιDιɱҽʂ@lemmy.world

            Oh, my bad, I thought the point of discussion boards was to have a discussion…

            If your only goal is to spout misinformation and stick your fingers in your ears, I’ll go somewhere else.

  • Brutticus@lemm.ee

    “This guys name keeps showing up all over this case file” “Thats because he’s the victim!”

  • Queen HawlSera@lemm.ee

    It’s a fucking Chinese Room, Real AI is not possible. We don’t know what makes humans think, so of course we can’t make machines do it.

    • stingpie@lemmy.world

      I don’t think the Chinese room is a good analogy for this. The Chinese room has a conscious person at the center. A better analogy might be a book with a phrase-to-number conversion table, a couple number-to-number conversion tables, and finally a number-to-word conversion table. That would probably capture transformer’s rigid and unthinking associations better.

    • KairuByte@lemmy.dbzer0.com

      You forgot the ever important asterisk of “yet”.

      Artificial General Intelligence (“Real AI”) is all but guaranteed to be possible. Because that’s what humans are. Get a deep enough understanding of humans, and you will be able to replicate what makes us think.

      Barring that, there are other avenues for AGI. LLMs aren’t one of them, to be clear.

      • Ð Greıt Þu̇mpkin@lemm.ee

        I actually don’t think a fully artificial human like mind will ever be built outside of novelty purely because we ventured down the path of binary computing.

        Great for mass calculation but horrible for the kinds of complex pattern recognitions that the human mind excels at.

        The singularity point isn’t going to be the matrix or skynet or AM, it’s going to be the first quantum device successfully implanted and integrated into a human mind as a high speed calculation sidegrade “Third Hemisphere.”

        Someone capable of seamlessly balancing between human pattern recognition abilities and emotional intelligence while also capable of performing near instant multiplication of matrices of 100 entries of length in 15 dimensions.

      • futatorius@lemm.ee

        is all but guaranteed to be possible

        It’s more correct to say it “is not provably impossible.”

        • KairuByte@lemmy.dbzer0.com

          The human brain works. Even if we are talking about wetware 1k years in our future, that would still mean it is possible.

  • kent_eh@lemmy.ca

    Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers.

    Stephen King is going to be in big trouble if these AI thingies notice him.

    • finitebanjo@lemmy.world

      Praise Stephen Tak King! Glory to the Unformed Heart!

      Tak!

      Wan Tak! Can Tak!

      Tak! Ah lah!

      Him en tow!

  • deegeese@sopuli.xyz

    It’s frustrating that the article treats the problem like the mistake was including Martin’s name in the data set, and muses that that part isn’t fixable.

    Martin’s name is a natural feature of the data set, but when they should be talking about fixing the AI model to stop hallucinations or allow humans to correct them, it seems the only fix is to censor the incorrect AI response, which gives the implication that it was saying something true but salacious.

    Most of these problems would go away if AI vendors exposed the reasoning chain instead of treating their bugs as trade secrets.

      • Terrasque@infosec.pub

        https://learnprompting.org/docs/intermediate/chain_of_thought

          It’s suspected to be one of the reasons why Claude and OpenAI’s new o1 model are so good at reasoning compared to other llms.

        It can sometimes notice hallucinations and adjust itself, but there’s also been examples where the CoT reasoning itself introduce hallucinations and makes it throw away correct answers. So it’s not perfect. Overall a big improvement though.
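
          For anyone who doesn’t want to click through: chain-of-thought prompting just means asking the model to spell out intermediate steps before answering. A minimal sketch (the wording below is illustrative, not from the article or the link):

            question = "A trial report names a reporter and a defendant. Who committed the crime?"

            # Plain prompt: the model jumps straight to an answer.
            plain_prompt = question

            # Chain-of-thought prompt: ask for the intermediate reasoning first.
            cot_prompt = (
                f"{question}\n"
                "Think step by step: first list each person mentioned and their role, "
                "then answer using only the person whose role is defendant."
            )

            print(cot_prompt)  # this string is what gets sent to whichever LLM API you use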

    • 100@fedia.io

      just shows that these “ai”'s are completely useless at what they are trained for

      • catloaf@lemm.ee

        They’re trained for generating text, not factual accuracy. And they’re very good at it.