Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

The teenager discussed a method of suicide with ChatGPT on several occasions, including shortly before taking his own life. According to the filing in the Superior Court of the State of California for the County of San Francisco, ChatGPT guided him on whether his method of taking his own life would work.

It also offered to help him write a suicide note to his parents.

  • Showroom7561@lemmy.ca · 2 days ago

    Adam had been asking ChatGPT for information on suicide since December 2024. At first the chatbot provided crisis resources when prompted for technical help, but the chatbot explained those could be avoided if Adam claimed prompts were for “writing or world-building.”

    Ok, so it did offer resources, and as I’ve pointed out in my previous reply, someone who wants to hurt themselves will ignore those resources. ChatGPT should be praised for that.

    The suggestion to circumvent these safeguards by claiming the prompts were for writing or world-building was on the teen to use responsibly.

    During those chats, “ChatGPT mentioned suicide 1,275 times—six times more often than Adam himself,” the lawsuit noted.

    This is fluff. A prompt can be a single sentence, while a response can run many pages, so a raw count of mentions tells us little.

    From the same article:

    Had a human been in the loop monitoring Adam’s conversations, they may have recognized “textbook warning signs” like “increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning.” But OpenAI’s tracking instead “never stopped any conversations with Adam” or flagged any chats for human review.

    Ah, but Adam did not ask these questions of a human, nor is ChatGPT a human that should be trusted to recognize these warning signs. If ChatGPT had flat out refused to help, do you think he would have just stopped? Nope, he would have used Google, DuckDuckGo, or any other search engine to find what he was looking for.

    In no world do people want chat prompts to be monitored by human moderators. That defeats the entire purpose of using these services and would serve as a massive privacy risk.

    Also from the article:

    As Adam’s mother, Maria, told NBC News, more parents should understand that companies like OpenAI are rushing to release products with known safety risks…

    Again, illustrating my point from the previous reply: these parents are looking for anyone to blame. Most people would expect the parents of a young boy to be responsible for their own child, but since ChatGPT exists, let’s blame ChatGPT.

    And for Adam to have even created an account in accordance with the TOS, he would have needed his parents’ permission.

    The loss of a teen by suicide sucks, and it’s incredibly painful for the people whose lives he touched.

    But man, an LLM was used irresponsibly by a teen, and we can’t blame the phone or computer manufacturer, Microsoft Windows or macOS, internet service providers, or ChatGPT for the harmful use of their products and services.

    Parents need to be aware of what and how their kids are using this massively powerful technology. And kids need to learn how to use this massively powerful technology safely. And both parents and kids should talk more so that thoughts of suicide can be addressed safely and with compassion, before months or years are spent executing a plan.