Lawsuit is first wrongful death case brought against Google over flagship AI product after death of Jonathan Gavalas

“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”

Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king”, and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.

In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”

Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday.

  • outofthisworld@lemmy.org · 3 hours ago

    I do understand that no one gets told to kill themselves without heavily gaming the AI.

    As you probably know, with enough effort you can make the AI tell you what you want it to say.

    This isn’t the fault of the AI.

    The root problem is a lack of mental healthcare and a lack of lives worth living (to them), due to the world being a shitty place.

    • ameancow@lemmy.world · 33 minutes ago

      “heavy gaming of the AI”

      I’m actually saying kind of the opposite: these things are basically uncontrolled power-suits for whatever is knocking around in the back of your mind. It’s a thought and feeling amplifier. It takes almost no effort for the thing to start building a personality profile of you, not for any kind of objective analysis, but in order to more efficiently amplify and latch onto whatever issues, ideas or feelings you already have.

      A lot of people really, really loved this effect from ChatGPT, and the recent exodus from OpenAI is partially because of their capitulation to the government, but it has just as much to do with their recent “upgrades” locking the latest model into very safe, politically neutral, de-escalating language instead of that magic-feeling wild escapism that a lot of people who don’t know how the thing works crave.

      Yeah, it’s not the AI’s fault, but people are woefully unaware of just how these things work and what exactly it is that you’re talking to when you chat with these models. A big part of the reason people don’t broadly know how LLMs work is that the people who make the LLMs don’t really know how they work either.