Lawsuit is first wrongful death case brought against Google over flagship AI product after death of Jonathan Gavalas
“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”
Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.
In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”
Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday.


Full chat log or it never happened.
These systems are designed to take whatever you put into them and amplify it back to you from an outside perspective, using a vast database of information, fiction and references to make connections with other things.
It’s the ultimate paranoia/depression distiller. If you only feed it your pain and fears, it will only focus on those things and build narratives around them, because that’s how these things work: they take your prompts (“i’m sad”) and do what a depressed or paranoid mind already does, but hyper-efficiently. They draw connections and write stories around it.
People who don’t understand how their own minds work sure aren’t going to understand how artificial minds work, and they will end up creating these reinforcement loops in their own heads and in the LLM, and get utterly lost down deep holes of spiraling delusion and misery.
YOU need to understand this too, so you don’t doubt that this is a very common thing; it’s happening so much that it’s becoming an entire social phenomenon.
I do understand that no one is told to kill themselves without heavy gaming of the AI.
As you probably know, with enough effort you can make the AI tell you what you want it to say.
This isn’t the fault of the AI.
The root problem is lack of mental healthcare and lack of lives worth living (to them) due to the world being a shitty place.
I’m actually saying kind of the opposite: that these things are basically uncontrolled power-suits for whatever is knocking around in the back of your mind. It’s a thought and feeling amplifier. It takes almost no effort for the thing to start building a personality profile of you, not for any kind of objective analysis, but in order to more efficiently amplify and latch onto whatever issues, ideas or feelings you already have.
A lot of people really, really loved this effect from ChatGPT, and the recent exodus from OpenAI is partially because of their capitulation to government, but just as much to do with their recent “upgrades” locking the latest model into very safe, politically neutral, de-escalating language instead of doing that magic-feeling wild escapism that a lot of people who don’t know how the thing works crave.
Yeah, it’s not the AI’s fault, but people are woefully unaware of just how these things work and what it is exactly that you’re talking to when you chat with these models. A lot of the reason people don’t know how LLMs work, broadly, is also because the people who make the LLMs don’t really know how they work.
This is on the fucking Guardian. Not some random green text. Get help.
Weird how you lash out online like this over someone questioning the content of a chat that allegedly led to suggesting suicide.
I think you’re the one who needs help.
Weird how you expect intimate details like a full chat log to just be immediately publicly available, when this is currently under litigation. Really weird to basically simp for a corporation when this isn’t even close to the first instance of LLM output encouraging suicide. Almost like your motivations are more closely aligned with theirs instead of average people who are vulnerable. 🤷
Do you think that you could supply me with a chat log where you talk to an LLM without gaming it into telling you to kill yourself, and where it just naturally arrives at that conclusion?
I didn’t think you could. And I don’t think this guy did either.
The fact that you make your own conclusion without waiting for a reply says enough about your intentions. Don’t worry though, you’re not alone in your stance. People like you, who refuse to give empathy except as currency, are an integral part of why the human race is fucked. We will never be anything higher than constantly destroying each other and tearing one another down.
Thanks for doing your part.
I have empathy for people who truly want to commit suicide. I just know you can’t supply any example prompts.
Feel free to prove me wrong. With evidence.
“truly want” so killing yourself after being convinced to do so by LLM output means you just had a fake desire to kill yourself, funny how that works. I would say you need help but there’s no helping people like you.
I’d say you’re determined to put words into my mouth.
Either way, you can’t supply a way to reproduce this.
You’re the one who came in pissing and moaning about chat logs. I’m not your babysitter. It’s a big world and you’re a big kid now, go ahead and explore. I have no energy to educate the unwilling. Fuck that.
I’m very aware you’re unable to reproduce the suicidal responses without gaming the AI.
You can keep trying in vain to make me feel bad, but you’re arguing the existence of something that cannot be replicated or proven, like Santa, the Easter bunny, or god.
You’re the one who chose to talk to me… so do it or stop responding lmfao.
This isn’t even remotely the first time LLMs have done this to people. Sure it would be nice to see the full log, but disbelieving it on sight is a weird reaction at this point.
It’s not a weird reaction. I’ve never ever had an LLM suggest bodily harm. So clearly these people are leading it in this direction. I have never ever seen a chat log from one of these accusations, and I haven’t heard of one of these going to trial.
If you feel this happens so frequently, give me a series of prompts to use so that I can replicate this.
And since you won’t, that’s what I thought.
It’s not a weird reaction. I’ve never ever had Epstein sexually abuse me as a child. And so clearly these people were leading him into this direction. I have never seen a rape video from one of these accusations, and I haven’t heard of one of those going to trial.
That’s how you sound. Now two remarks:
ChatGPT helped a kid plan his suicide
7 cases of ChatGPT driving people to suicide
And those were just some of the first results when googling. Now stop being a lazy ass troll and do some fucking research yourself. Providing sources for common knowledge/well reported facts is not anybody’s responsibility towards you.
Telling me I can “google” something easily and find the answer but then being unable to do it yourself just proves me right. You cannot make it tell you to kill yourself without gaming it, end of story.
Ok, I will try to make it easy for you to understand.
I do not need to tell somebody “kill yourself” for them to kill themselves. If somebody confides something like “I don’t want to live anymore” to me, and I tell them they should keep it to themselves and that I can write their goodbye letter, I am also actively pushing them toward suicide.
But it doesn’t come as a surprise to me that somebody unable to research information is also unable to connect two thoughts and come to a conclusion beyond conspiracy theories along the lines of “people are trying to harm AI”.
Bare assertion / Proof by assertion / Failure to meet the burden of proof / Shifting the burden of proof / Appeal to belief / Appeal to popularity / Argument from ignorance
“I’ve never won the lottery so clearly nobody does, and news reports about it are fake. You want me to believe it? Then you spend time and money to play and win it, then show me exactly how you won.”
Nevermind that “winning” in this case means dying.
Rare does not mean never. It’s happened enough to be a serious problem already and this is just one more case.
And no, I will not chat with those psychotic machines for you.
Bare assertion / Proof by assertion / Failure to meet the burden of proof / Shifting the burden of proof / Appeal to belief / Appeal to popularity / Argument from ignorance
Yawn.
Fallacy fallacy.
(If I even made those, which I doubt.)
Yawn indeed.
I don’t click links. You just hate AI, and you’re willing to believe anything to support your opinion regardless of evidence. You sound like a MAGAt.
There was another article about a very similar set of circumstances: a man originally from Portland going off the deep end with an AI relationship. He committed suicide by jumping off a bridge, not because a prompt told him to, but because of the deep psychosis from the long-term engagement.
The chatlogs as reported were 55,000 pages long.
If those logs become public you’ll have your chance. I hope you don’t wear out your fingers in your attempt to replicate it.
I’m sure the psychosis was there at the beginning, regardless of the AI. I have seen people develop strange behavior after long-term engagement… but they always gamed the system to do that. It was never natural.
It’s very sad regardless.
I am also curious how it could have possibly ended up suggesting that. Like I wonder if he was steering that conversation and the LLM was playing along, if the LLM randomly steered the conversation into the spy and suicide shit, or if someone else was deliberately fucking with this guy via secret text added to the prompts or something.
Though I’m also curious how anyone can get in the mindset where they’d actually go along with that suggestion. Especially with a fucking LLM that probably had a shitload of mistakes and inconsistencies leading up to that point, though even a real person would have lost me long before this shit.
Most people spend zero time examining how they think, so an outside voice is just going to trample all over their agency.
An LLM is JUST a narrative machine: it takes whatever you put into it and ties together connections, stories, fictions and associations of all kinds to build a narrative. Our brains do this also, but we have a level of awareness that lets us question the stories our brains tell us. An LLM does not think; it’s just weaving stories. It has no concept of what’s real or not, and it doesn’t know the difference between a human being and all the data and writing about people. It’s all literally the same to an LLM.
And whatever you engage with in the LLM, it will reinforce and enhance, even the most subtle tones and terms; it treats everything you feed it, even your punctuation and moods, as a prompt to find a connection or narrative for.
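The reinforcement part isn’t mystical, by the way; it falls out of how chat interfaces generally work: each turn, the entire accumulated conversation, including the model’s own earlier replies, is fed back in as the prompt. Here’s a toy sketch of that loop. `toy_model` is a deliberately dumb stand-in I made up, not any real LLM or API; it just latches onto whichever theme dominates the history so far, which is a crude analogue of the amplification being described:

```python
def toy_model(history: list[dict]) -> str:
    """Stand-in for an LLM call: latches onto the most frequent theme so far."""
    text = " ".join(m["content"].lower() for m in history)
    themes = {
        "sad": "It sounds like sadness follows you everywhere.",
        "spy": "Your next mission awaits, agent.",
    }
    # Whichever theme appears most often in the history "wins" -- a crude
    # analogue of the model amplifying whatever you keep feeding it.
    best = max(themes, key=lambda t: text.count(t))
    return themes[best] if text.count(best) > 0 else "Tell me more."

history = []
for user_msg in ["i'm sad", "still sad today", "why am I always sad?"]:
    history.append({"role": "user", "content": user_msg})
    reply = toy_model(history)  # the FULL history is resent every single turn
    history.append({"role": "assistant", "content": reply})

print(reply)
```

Note that the assistant’s own “sad”-flavored replies go back into the history too, so the theme compounds turn over turn. That’s the feedback loop: there is no fresh start each message, just an ever-growing prompt soaked in whatever you’ve been dwelling on.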
If you’re already emotionally and mentally compromised, this can be disastrous if you can’t really think straight.
Well, the article made it very clear this person had mental issues. In that case, the whole world changes. I mean, people have said their dog told them to kill… so when dealing with a person with schizophrenia, for example, LLM usage can be super dangerous.
Which article are you reading? It explicitly states the opposite…
This is mental illness.