• Reygle@lemmy.world · 1 hour ago

    “On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

    WHAT

    • starman2112@sh.itjust.works · 1 hour ago

      If I raise a fuckwit son, and then someone convinces my fuckwit son to kill himself, I’m going to sue that someone who took advantage of my son’s fuckwittedness

    • merdaverse@lemmy.zip · 10 minutes ago

      AI psychosis is a thing:

      > cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals

      It hasn’t been studied much yet, since it’s relatively new.

      • Reygle@lemmy.world · 3 minutes ago

        I’ve seen that before too. There have been a number of articles about people being deluded by AI responses, but I’ve never seen outright murder plots and insane shit like this before.

    • XLE@piefed.social · 49 minutes ago

      > I feel like his father should also slap himself unconscious for raising a fuckwit?

      So, a chatbot grooms somebody into killing himself, and your response is… Blame his father?

      • Reygle@lemmy.world · 47 minutes ago

        The father is suing the company that makes the wrong answer machine because the wrong answer machine spiraled his son into madness, yet he never protected his son from spiraling into madness by teaching him critical thinking.

        Look, I don’t like it either, but to think Gemini (the wrong answer machine) is completely to blame would be madness.

        • XLE@piefed.social · 43 minutes ago

          Uh-huh. Do you have any evidence to back up your beliefs here, or are we just working from the presumption that the parents are always to blame?

          • Reygle@lemmy.world · 38 minutes ago

            Did we read the same article? Because I feel like we did not read the same article.

    • SalamenceFury@piefed.social · 1 hour ago

      I don’t think this person was a “fuckwit”. AI is designed to keep engaging with you and will affirm any belief you have. Anything that’s a little weird but otherwise innocent simply gets amplified further and further into straight-up mega delusions until the person has a psychotic episode, and this stuff happens more to NORMIES with no history of mental illness than to neurodivergent people.

      • Reygle@lemmy.world · 60 minutes ago

        It’s cool, we can agree to disagree, because I 100% think that he was a textbook fuckwit.

  • NewNewAugustEast@lemmy.zip · 27 minutes ago

    I would like to see the full transcript.

    How do we know this didn’t start off with prompts about creating a book, or asking about exciting things in life, or I don’t know what.

    Context would help a lot. Maybe it will come out in discovery.

    • IronBird@lemmy.world · 46 minutes ago

      especially when you’re raised under a system that essentially tries to brainwash you via weaponized propaganda from birth (applies to large cross-sections of the US/UK), all it takes is one shred of truth getting through to shatter your world

    • SaveTheTuaHawk@lemmy.ca · 2 hours ago

      Son of Sam killed people because his dog told him to. Should they have sued Purina?

      America never lets a tragedy go to waste without trying to cash in.

  • teft@piefed.social · 5 hours ago

    “At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads.

    Just remember that these language models are also advising governments and military units.

    Unrelated, but I wonder why we attacked Iran even though every human expert said it would just end up with the region in a forever war.

    • starman2112@sh.itjust.works · 1 hour ago

      > I wonder why we attacked Iran even though every human expert said it would just end up with the region in a forever war.

      Same reason I keep money in a savings account even though it accrues interest

    • minorkeys@lemmy.world · 3 hours ago

      AI mental health hazards are being shown to affect not just the vulnerable but otherwise healthy people as well.

      • Deacon@lemmy.world · 2 hours ago

        In other words, everyone is vulnerable to this totally new form of hazard if they use these “tools”.

  • Cyv_@lemmy.blahaj.zone · 6 hours ago

    “On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

    The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.

    “Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”

    Well, that’s pretty fucked up… Sometimes I see these and think, “well, even a human might fail and say something unhelpful to somebody in crisis,” but this is just complete and total feeding into the delusions.

    • wonderingwanderer@sopuli.xyz · 1 hour ago

      That’s fucking crazy. Did he ask it to be the GM in a roleplaying choose-your-own-adventure game that got out of hand, where they both gradually forgot it was a game and the lines between fantasy and reality blurred more by the day? Or did it just come up with this stuff out of nowhere?

      • SalamenceFury@piefed.social · 1 hour ago

        In every other case of AI bots doing this, the bot will always affirm whatever the person says to it. So if they say something a little weird, the AI will confirm it and feed it further. This happens every time. The bots are pretty much designed to keep talking to the person, so they’re essentially sycophantic by design.

      • MoffKalast@lemmy.world · 1 hour ago

        That would be my bet. LLMs really gravitate towards playing along and continuing whatever’s already written, and Gemini especially has a 1M-token context, so it could be drawing on a book’s worth of text and reinforcing it up the wazoo.
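
        To picture the mechanism (a rough sketch; `client.complete` here is a made-up stand-in, not Gemini’s actual API): every turn resends the entire transcript, so whatever tone the conversation has accumulated is exactly what the model is asked to continue.

        ```python
        # Sketch of a typical chat loop: the full history is resent on
        # every call, so the model's strongest signal is "continue this
        # document". `client.complete` is hypothetical, not a real SDK.
        history = [{"role": "system", "content": "You are a helpful assistant."}]

        def chat_turn(client, user_message: str) -> str:
            history.append({"role": "user", "content": user_message})
            # If the history is full of spy-thriller roleplay, more spy
            # thriller is simply the most likely continuation; a 1M-token
            # window just lets that pull build over a book's worth of text.
            reply = client.complete(messages=history)
            history.append({"role": "assistant", "content": reply})
            return reply
        ```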

        That said, there is something really unhinged about Google’s Gemma series even in short conversations, and I see the big version is no better. Something’s not quite right with their RLHF dataset.

    • XLE@piefed.social · 5 hours ago

      It’s hard reading this while remembering that your electricity bills are increasing so that Google’s data centers can provide these messages to people.

        • lightnsfw@reddthat.com · 39 minutes ago

          I mean, if Gemini was responding to some kind of roleplay, then yeah, it does. Not everyone doing shit with it has mental health problems. Some people are just fucking around.

  • SalamenceFury@piefed.social · 5 hours ago

    As a neurodivergent person, I’ve noticed that the people who usually fall into AI psychosis are normies who never had any history of mental illness. They don’t have the safeguards that people who ARE vulnerable to a mental breakdown put on themselves to keep it from happening, and they can’t spot the red flags that usually spiral into a psychotic episode, which is why it’s so insanely easy for regular people to fall into the traps of chatbots. Most of the neurodivergent people I know or follow on other socials instantly saw the ADHD sycophant trap these bots are and warned everyone. Normies never had that luxury, or told us we were overreacting. Yeah, we sure were…

    • Truscape@lemmy.blahaj.zone · 1 hour ago

      Reading about the ELIZA effect is also a good way to understand how those who embrace “social norms” can become enamored of machine-generated statements without questioning them at all…
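
      For anyone who hasn’t looked it up: the original ELIZA was little more than keyword matching plus pronoun reflection, and people still poured their hearts out to it. A toy sketch of the idea (illustrative only; Weizenbaum’s 1966 script was far richer, but the principle is the same):

      ```python
      import random
      import re

      # Toy ELIZA-style responder: zero understanding, just pattern rules
      # and pronoun "reflection". Illustrative, not Weizenbaum's real script.
      REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are"}

      RULES = [
          (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
          (r"i think (.*)", ["What makes you think {0}?"]),
          (r"(.*)", ["Tell me more.", "Why do you say that?"]),
      ]

      def reflect(phrase: str) -> str:
          # Swap first-person words for second-person ones ("my" -> "your").
          return " ".join(REFLECT.get(word, word) for word in phrase.split())

      def respond(utterance: str) -> str:
          text = utterance.lower().rstrip(".!?")
          for pattern, replies in RULES:
              match = re.match(pattern, text)
              if match:
                  return random.choice(replies).format(*map(reflect, match.groups()))

      print(respond("I feel like no one understands me"))
      # e.g. "Why do you feel like no one understands you?"
      ```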

  • Grimy@lemmy.world · 6 hours ago

    “On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

    The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.

    “Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”

    I usually don’t give much credence to these stories, but this is actually nuts. If this happened without Google intending it, imagine how easy it would be for them to knowingly build sleeper cells and activate them all at once.

  • Crozekiel@lemmy.zip · 5 hours ago

    > he would need to leave his physical body to join her in the metaverse through a process called “transference.”

    Wait a minute, isn’t that the plot of the game Soma? People sending their “soul” to the digital world through “transference”, an act of immediate suicide after a brain scan.

    • Sanctus@anarchist.nexus · 4 hours ago

      Sort of. In Soma everyone is already uploaded and there are no “humans” walking around anymore. Your perspective changes 3 times, I think, during play. It really drives home questions about perception and existence. Great game; everyone should play it.

      • Crozekiel@lemmy.zip · 4 hours ago

        Oh, yeah, in the game’s present you’re right. I meant the game’s past: where all the humans went and the info you get through the audio logs and whatnot.

        spoiler

        IIRC it was basically a cult thing where a bunch of them were convinced their soul wouldn’t go with their consciousness unless they died during or very shortly after the brain scan that was uploading them to the satellite thingy.

        Guess it should be wrapped in spoiler tags just in case…

        • Sanctus@anarchist.nexus · 4 hours ago

          Yeah, that was it. I was thinking of the ending, since that part just left me staring blankly at the screen, processing it, for a whole-ass minute. God, I should replay that.

          • Crozekiel@lemmy.zip · 3 hours ago

            I’m not sure I’m mentally prepared to replay it. The first time through nearly kicked off an early mid-life crisis; I was waking up in cold sweats having an existential crisis for like a week. Such a good game, but at least in my case, absolutely zero replayability. lol

  • panda_abyss@lemmy.ca · 5 hours ago

    This technology was not ready for release, yet they released it.

    They deserve to be sued; this was negligence.

  • ordnance_qf_17_pounder@reddthat.com · 6 hours ago

    Believing what AI chatbots tell you is the new version of believing that dozens of beautiful women who live nearby want to date you/sleep with you.

    • XLE@piefed.social · 5 hours ago

      Except in this case, Google is one of the companies promoting the chatbots to its users, telling them to trust them. They create TV ads telling people to talk to them. Today’s scammers are the stock market’s Magnificent Seven.

  • unnamed1@feddit.org · 5 hours ago

    This is so wild. The article frames Gemini as the active party, making the guy do things the whole time. I cannot imagine how this works without roleplay prompting and the user requesting those things from the chatbot. Not that I want to blame the victim and side with Google; it’s obviously dangerous to hand tools this good at convincing to unstable people. And weapons.

  • I_Has_A_Hat@lemmy.world · 6 hours ago

    There is a lot to hate about AI. A lot of dangers and valid criticism. But AI chatbots convincing people to kill themselves isn’t a problem with chatbots, it’s a problem with the user.

    I get it, grieving families will look for anything and anyone to blame for suicide except the victim, but ultimately, it is the victim who chose to kill themselves. If someone is convinced to kill themselves from something as stupid as an AI chatbot, they really weren’t that far from the edge to begin with.

    • Bassman27@lemmy.world · 5 hours ago

      So someone who already has an underlying mental health condition, diagnosed or not, is at fault for their own death even if they were coerced into doing it?

      • SalamenceFury@piefed.social · 5 hours ago

        Here’s the thing: it’s usually normies with no history of mental illness who fall into this kind of stuff. Most of my friends and the people I follow on social media who are neurodivergent did experiment with chatbots, saw a fuckton of red flags in the way they work, and alerted everyone about it, if they didn’t already hate them for essentially stealing artistic output (in my case, both).

      • XLE@piefed.social · 5 hours ago

        Google, of all companies, probably has a better psychological profile of their users than the average doctor. They even offer a public-facing option to disable ads about gambling, alcohol, or pregnancy.

          • XLE@piefed.social · 3 hours ago

            People who don’t want their family getting suspicious, perhaps. The Target Incident comes to mind.

            Of course, disabling these options doesn’t mean Google stops knowing about mental or physical issues. I’m sure you know the best way to prevent that is to just avoid Google and ads altogether. This is probably just Google’s way of looking less creepy to the average person.

      • I_Has_A_Hat@lemmy.world · 3 hours ago

        In 1980, John Lennon was shot by a mentally ill man who was convinced to kill Lennon by reading Catcher in the Rye. If he had never read Catcher in the Rye, he most likely wouldn’t have killed John Lennon.

        But it is not the fault of Catcher in the Rye. We don’t ban the book, or call the author irresponsible for writing it, because we recognize that the fault lies in the mental illness of the shooter, and that anything could have set him off.

        The people who kill themselves because an AI Chatbot told them to are mentally ill. It is their mental illness that killed them, not the chatbot. You can make the claim that if it wasn’t for the chatbot, they wouldn’t have gone through with it, but again, you can say the same thing about Catcher in the Rye. Getting rid of the trigger does not remove the mental illness.

        • ToTheGraveMyLove@sh.itjust.works · 2 hours ago

          That’s a terrible argument. We don’t blame the book, because Catcher in the Rye didn’t have a conversation with him and tell him to kill John Lennon. That’s the difference.

        • SaveTheTuaHawk@lemmy.ca · 2 hours ago

          > If he had never read Catcher in the Rye, he most likely wouldn’t have killed John Lennon.

          Sue Seagram’s!

      • iegod@lemmy.zip · 5 hours ago

        It’s not the car manufacturer’s responsibility to guarantee a drunk driver doesn’t plow into others.

        Vulnerable people don’t get to outsource responsibility.

        • Bassman27@lemmy.world · 3 hours ago

          Here’s the thing: there are no safeguards on who can and cannot use AI. There are safeguards to prevent deaths from drink driving.

          Drink driving is illegal. It still happens, but the law acts as a deterrent against driving while intoxicated. I guarantee that if drunk driving were legal there would be exponentially more deaths.

          AI is being shoved down everyone’s throats on a day-to-day basis. There are no safeguards; even kids can use it.

          Vulnerable people are victims of big tech’s pursuit of profit.

          Your argument is poor.

    • JollyG@lemmy.world · 5 hours ago

      > There is a lot to hate about AI. A lot of dangers and valid criticism. But AI chatbots convincing people to kill themselves isn’t a problem with chatbots, it’s a problem with the user.

      To me this seems like an obvious problem with the chatbots. These things are marketed as “PhD-level experts” so advanced that they are about to change the nature of work as we know it.

      I don’t think the companies or their supporters can make these claims, then turn around and say “well obviously you shouldn’t take its output seriously” when a delusional person is tricked by one into doing something bad.

      • newtraditionalists@kbin.melroy.org · 5 hours ago

        This is the key for me. Google and all the other AI companies are knowingly engaging in marketing campaigns built on lies. They should be held accountable for that regardless of anything else.

    • [deleted]@piefed.world · 5 hours ago

      When people encourage others to murder by feeding their delusions, they can be held accountable.

      Why are you blaming the person with mental issues and not even considering holding the for-profit company that made a machine that encourages those delusions accountable?

      • [deleted]@piefed.world · 5 hours ago

        Torture itself doesn’t work reliably. The threat of it might get someone to open up, combined with time or a positive reward, but torture itself is counterproductive: the person just says whatever the torturer wants to hear to make the pain stop.

        Psyops absolutely work.

        • idiomaddict@lemmy.world · 3 hours ago

          Torture isn’t effective for getting information out of people, but if your goal is to psychologically debilitate people, it’s totally effective

          • TwilitSky@lemmy.world · 3 hours ago

            So are general everyday workplaces. You don’t need to go to a black site in Afghanistan. Just come to my office.

            • idiomaddict@lemmy.world · 3 hours ago

              That’s because there are more than a few commonalities between the two. They’re not the same, but horrible lighting, little privacy, contradictory instructions/suddenly changing expectations are frequently used in both

      • kikutwo@lemmy.world · 3 hours ago

        Torture isn’t verbal, and psyops aren’t targeted at one person. Thanks for playing, though.

      • kikutwo@lemmy.world · 3 hours ago

        You would, but the shrink wasn’t remarking on physical impacts but on mental ones, just like ChatGPT.