Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.
The teenager discussed a method of suicide with ChatGPT on several occasions, including shortly before taking his own life. According to the filing in the superior court of the state of California for the county of San Francisco, ChatGPT guided him on whether his method of taking his own life would work.
It also offered to help him write a suicide note to his parents.
No, it’s not wild at all. The system flagged the messages as harmful and did nothing. They knew and did nothing.
There’s no mention of that at all.
The article only says “Today ChatGPT may not recognise this as dangerous or infer play and – by curiously exploring – could subtly reinforce it,” in reference to an example of someone telling the software that they could drive for 24 hours a day after not sleeping for two days.
That said, what could the system have done? If a warning came up saying “this prompt may be harmful” and then listed mental health resources, that would really only be to cover their ass.
And if it went further by contacting the authorities, would that be a step in the right direction? Privacy advocates would say no, and the implication that the prompts you enter could be used against you would have considerable repercussions.
Someone who wants to hurt themselves will ignore pleas, warnings, and suggestions to get help.
Who knows how long this teen was suffering from mental health issues and suicidal thoughts. Weeks? Months? Years?
https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-teen-plan-suicide-after-safeguards-failed-openai-admits/
and
Why do you immediately leap to calling the cops? Human moderators exist for this; anything would’ve been better than blind encouragement.
Ok, so it did offer resources, and as I’ve pointed out in my previous reply, someone who wants to hurt themselves will ignore those resources. ChatGPT should be praised for that.
The suggestion to circumvent these safeguards in order to fulfill some writing or world-building task was the teen’s to use responsibly.
This is fluff. A prompt can be a single sentence, and a response many pages.
From the same article:
Ah, but Adam did not ask these questions of a human, nor is ChatGPT a human that should be trusted to recognize these warnings. If ChatGPT had flat-out refused to help, do you think he would have just stopped? Nope, he would have used Google or DuckDuckGo or any other search engine to find what he was looking for.
In no world do people want chat prompts to be monitored by human moderators. That defeats the entire purpose of using these services and would pose a massive privacy risk.
Also from the article:
Again, illustrating my point from the previous reply: these parents are looking for anyone to blame. Most people would expect that parents of a young boy would be responsible for their own child, but since ChatGPT exists, let’s blame ChatGPT.
And for Adam to have even created an account in accordance with the TOS, he would have needed a parent’s permission.
The loss of a teen by suicide sucks, and it’s incredibly painful for the people whose lives he touched.
But man, an LLM was used irresponsibly by a teen, and we can’t go on to blame the phone or computer manufacturer, Microsoft Windows or macOS, internet service providers, or ChatGPT for the harmful use of their products and services.
Parents need to be aware of what their kids are using this massively powerful technology for, and how they’re using it. And kids need to learn how to use this massively powerful technology safely. And both parents and kids should talk more so that thoughts of suicide can be addressed safely and with compassion, before months or years are spent executing a plan.