Lawsuit is first wrongful death case brought against Google over flagship AI product after death of Jonathan Gavalas
“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”
Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.
In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”
Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday.


There was another article about a very similar set of circumstances: a man originally from Portland who went off the deep end in an AI relationship. He died by suicide, jumping off a bridge, not because a prompt told him to, but because of the deep psychosis that developed from the long-term engagement.
The chatlogs as reported were 55,000 pages long.
If those logs become public you’ll have your chance. I hope you don’t wear out your fingers in your attempt to replicate it.
I’m sure the psychosis was there at the beginning, regardless of the AI. I have seen people develop strange behavior after long-term engagement, but they always gamed the system to do that. It was never natural.
It’s very sad regardless.