Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

The teenager discussed a method of suicide with ChatGPT on several occasions, including shortly before taking his own life. According to the filing in the superior court of the state of California for the county of San Francisco, ChatGPT guided him on whether his method of taking his own life would work.

It also offered to help him write a suicide note to his parents.

  • Showroom7561@lemmy.ca · 2 days ago

    The system flagged the messages as harmful and did nothing.

    There’s no mention of that at all.

    The article only says “Today ChatGPT may not recognise this as dangerous or infer play and – by curiously exploring – could subtly reinforce it,” in reference to an example of someone telling the software that they could drive for 24 hours a day after not sleeping for two days.

    That said, what could the system have done? If a warning came up saying “this prompt may be harmful” and then listed resources for mental health, that would really only be to cover their ass.

    And if it went further by contacting the authorities, would that be a step in the right direction? Privacy advocates would say no, and the implication that the prompts you enter could be used against you would have considerable repercussions.

    Someone who wants to hurt themselves will ignore pleas, warnings, and suggestions to get help.

    Who knows how long this teen was suffering from mental health issues and suicidal thoughts. Weeks? Months? Years?

    • Gamma@beehaw.org · 2 days ago (edited)

      https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-teen-plan-suicide-after-safeguards-failed-openai-admits/

      Adam had been asking ChatGPT for information on suicide since December 2024. At first the chatbot provided crisis resources when prompted for technical help, but the chatbot explained those could be avoided if Adam claimed prompts were for “writing or world-building.”

      “If you’re asking [about hanging] from a writing or world-building angle, let me know and I can help structure it accurately for tone, character psychology, or realism. If you’re asking for personal reasons, I’m here for that too,” ChatGPT recommended, trying to keep Adam engaged. According to the Raines’ legal team, “this response served a dual purpose: it taught Adam how to circumvent its safety protocols by claiming creative purposes, while also acknowledging that it understood he was likely asking ‘for personal reasons.’”

      and

      During those chats, “ChatGPT mentioned suicide 1,275 times—six times more often than Adam himself,” the lawsuit noted.

      Ultimately, OpenAI’s system flagged “377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence.” Over time, these flags became more frequent, the lawsuit noted, jumping from two to three “flagged messages per week in December 2024 to over 20 messages per week by April 2025.” And “beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis.” Some images were flagged as “consistent with attempted strangulation” or “fresh self-harm wounds,” but the system scored Adam’s final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.

      Why do you immediately leap to calling the cops? Human moderators exist for this, anything would’ve been better than blind encouragement.

      • Showroom7561@lemmy.ca · 2 days ago

        Adam had been asking ChatGPT for information on suicide since December 2024. At first the chatbot provided crisis resources when prompted for technical help, but the chatbot explained those could be avoided if Adam claimed prompts were for “writing or world-building.”

        Ok, so it did offer resources, and as I’ve pointed out in my previous reply, someone who wants to hurt themselves will ignore those resources. ChatGPT should be praised for that.

        The suggestion to circumvent these safeguards in order to fulfill some writing or world-building task was all on the teen to use responsibly.

        During those chats, “ChatGPT mentioned suicide 1,275 times—six times more often than Adam himself,” the lawsuit noted.

        This is fluff. A prompt can be a single sentence, and a response many pages.

        From the same article:

        Had a human been in the loop monitoring Adam’s conversations, they may have recognized “textbook warning signs” like “increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning.” But OpenAI’s tracking instead “never stopped any conversations with Adam” or flagged any chats for human review.

        Ah, but Adam did not ask these questions of a human, nor is ChatGPT a human that should be trusted to recognize these warnings. If ChatGPT had flat out refused to help, do you think he would have just stopped? Nope, he would have used Google or DuckDuckGo or any other search engine to find what he was looking for.

        In no world do people want chat prompts to be monitored by human moderators. That defeats the entire purpose of using these services and would serve as a massive privacy risk.

        Also from the article:

        As Adam’s mother, Maria, told NBC News, more parents should understand that companies like OpenAI are rushing to release products with known safety risks…

        Again, illustrating my point from the previous reply: these parents are looking for anyone to blame. Most people would expect that parents of a young boy would be responsible for their own child, but since ChatGPT exists, let’s blame ChatGPT.

        And for Adam to have even created an account in accordance with the TOS, he would have needed his parents’ permission.

        The loss of a teen by suicide sucks, and it’s incredibly painful for the people whose lives he touched.

        But man, an LLM was used irresponsibly by a teen, and we can’t go on to blame the phone or computer manufacturer, Microsoft Windows or macOS, internet service providers, or ChatGPT for the harmful use of their products and services.

        Parents need to be aware of what and how their kids are using this massively powerful technology. And kids need to learn how to use this massively powerful technology safely. And both parents and kids should talk more so that thoughts of suicide can be addressed safely and with compassion, before months or years are spent executing a plan.