Lawsuit is first wrongful death case brought against Google over its flagship AI product after the death of Jonathan Gavalas
“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”
Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.
In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”
Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday.


I am also curious how it could have possibly ended up suggesting that. Like I wonder whether he was steering that conversation and the LLM was just playing along, whether the LLM randomly steered the conversation into the spy and suicide shit, or whether someone else was deliberately fucking with this guy via hidden text injected into the prompts or something.
Though I’m also curious how anyone can get into the mindset where they’d actually go along with that suggestion, especially with a fucking LLM that probably had a shitload of mistakes and inconsistencies leading up to that point. Even a real person would have lost me long before this shit.
Most people spend zero time examining how they think, so an outside voice is just going to trample all over their agency.
An LLM is JUST a narrative machine: it takes whatever you put into it and ties together connections, stories, fictions, and associations of all kinds to build a narrative. Our brains do this too, but we have a level of awareness that lets us question the stories our brains tell us. An LLM does not think; it’s just weaving stories. It has no concept of what’s real or not, and it doesn’t know the difference between a human being and all the data and writing about people. It’s all literally the same to an LLM.
And whatever you engage with, the LLM will reinforce and amplify, even the most subtle tones and terms; it treats everything you feed it, even your punctuation and mood, as a prompt to find a connection or narrative for.
If you’re already emotionally and mentally compromised and can’t really think straight, this can be disastrous.
Well, the article made it very clear this person had mental issues. In that case, the whole picture changes. I mean, people have said their dog told them to kill… so for a person with schizophrenia, for example, LLM usage can be super dangerous.
Which article are you reading? It explicitly states the opposite…
This is mental illness.