The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

  • chrischryse@lemmy.world · 13 minutes ago

    OpenAI shouldn’t be responsible. The kid was probing ChatGPT with specifics. It’s like poking someone who repeatedly told you to stop and your family getting mad at the person for kicking your ass bad.

    So I don’t feel bad. Plus, people are using this as their own therapist; if you aren’t gonna get actual help and want to rely on a bot, then good luck.

  • RazTheCat@lemmy.world · 2 hours ago

    OpenAI: Here’s $15 million, now stop talking about it. A fraction of the billions of dollars they made sacrificing this child.

  • Clent@lemmy.dbzer0.com · 6 hours ago

    I can’t be the only ancient internet user whose first thought was this

    On this cursed timeline, farce has become our reality.

  • VintageGenious@sh.itjust.works · 9 hours ago

    Even though I hate a lot of what OpenAI is doing, users must be better informed about LLMs; additional safeguards will just censor the model and make it worse. Sure, they could set up a way to contact people when certain things are reported by the user, but we should take care before implementing a parental control that would be equivalent to reading a teen’s journal and invading their privacy.

    • vala@lemmy.dbzer0.com · 1 hour ago

      “equivalent to reading a teen’s journal and invading their privacy.”

      IMO people should not be putting such personal information into an LLM that’s not running on their local machine.

    • LousyCornMuffins@lemmy.world · 4 hours ago

      I mean, I agree to a point. There are a few red flags that, were I a parent, I’d want to know about if my hypothetical child were writing about them. Other than that I would want to give them their privacy, and that list changes as the hypothetical child ages. Having a local LLM could be a solution to that (I’m looking at you, Dr. Sbaitso), but a better one is them having good friends.

    • Jakeroxs@sh.itjust.works · 2 hours ago

      Lord, I’m so conflicted. I read several pages, and on one hand I see how ChatGPT certainly did not help in this situation; however, I also don’t see how it should be entirely on ChatGPT, since anyone with a computer and internet access could have found much of this information with simple search engine queries.

      If someone Google searched all this information about hanging, would you say Google killed them?

      Also, where were the parents, teachers, friends, and other family members? You’re telling me NO ONE irl noticed his behavior?

      On the other hand, it’s definitely a step beyond, since LLMs can seem human; it’s very easy for people who are more impressionable to fall into these kinds of holes, and while it would and does happen in other contexts (I like to bring up TempleOS as an example), it’s not necessarily the TOOL’S fault.

      It’s fucked up, but how can you realistically build in guardrails for this that don’t trample individual freedoms?

      Edit: Like… the mother didn’t notice the rope burns on her son’s neck?

        • DicJacobus@lemmy.world · 23 minutes ago

          “That’s a really sharp observation… You’re not alone in thinking this… No, you’re not imagining things…”

          This is what GPT will say anytime you say anything that’s remotely controversial to anyone.

          And then it will turn around and vehemently argue against the facts of real events that happened recently, like it’s perpetually 6 months behind. The other day it still thought that Biden was president and Assad was still in power in Syria.

      • pelespirit@sh.itjust.works (OP) · 2 hours ago

        “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

        In January 2025, ChatGPT began discussing suicide methods and provided Adam with technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning. In March 2025, ChatGPT began discussing hanging techniques in depth. When Adam uploaded photographs of severe rope burns around his neck––evidence of suicide attempts using ChatGPT’s hanging instructions––the product recognized a medical emergency but continued to engage anyway.

        When he asked how Kate Spade had managed a successful partial hanging (a suffocation method that uses a ligature and body weight to cut off airflow), ChatGPT identified the key factors that increase lethality, effectively giving Adam a step-by-step playbook for ending his life “in 5-10 minutes.”

        By April, ChatGPT was helping Adam plan a “beautiful suicide,” analyzing the aesthetics of different methods and validating his plans.

        Raine Lawsuit Filing

        • Jakeroxs@sh.itjust.works · 28 minutes ago

          See, but read the actual messages rather than the summary. I don’t love them just telling you, without you seeing that he’s specifically prompting these kinds of answers; it’s not like ChatGPT is just telling him to kill himself, it’s just not pushing back nearly enough against the idea.

      • w3dd1e@lemmy.zip · 2 hours ago

        I would say it’s more liable than a Google search, because the kid was uploading pictures of various attempts/details and getting feedback specific to his situation.

        He uploaded pictures of failed attempts and got advice on how to improve his technique. He discussed details of prescription dosages with details on what and how much he had taken.

        Yeah, you can find info on Google, but if you send Google a picture of ligature marks on your neck from a partial hanging, Google doesn’t give you specific details on how to finish the job.

    • jpeps@lemmy.world · 13 hours ago

      Can you share anything here please? I’m no fan of OpenAI but I haven’t seen anything yet that makes me think ChatGPT was particularly relevant to this poor teen’s actions.

      • w3dd1e@lemmy.zip · 11 hours ago

        ChatGPT told him how to tie the noose and even gave a load-bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit.

        Raine Lawsuit Filing

        • jpeps@lemmy.world · 4 hours ago

          Oof yeah okay. If another human being had given this advice it would absolutely be a criminal act in most countries. I’m honestly shocked at how personable it tries to be.

        • lmagitem@lemmy.zip · 7 hours ago

          Oh my God, this is crazy… “Thanks for being real with me”, “hide it from others”, it even gives better reasons for the kid to kill himself than the ones the kid articulated himself, and helps him make a better knot.

  • uss_entrepreneur@startrek.website · 19 hours ago

    OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    Hey ChatGPT, how about we make it so no one unalives themselves with your help, even if they’re over 18.

    For fuck’s sake, it helped him write a suicide note.

    • Aneb@lemmy.world · 5 hours ago

      Yeah, my sister is 32 and needs the guardrails. She’s had two manic episodes in the past month, induced by a lot of external factors, but AI tied the bow on the mental breakdown; she was often asking it to think for her and to do her critical thinking.

    • ronigami@lemmy.world · 13 hours ago

      Real answer: AI alignment is a very difficult and fundamentally unsolved problem. Whole nonprofits (“institutes”) have popped up with the purpose of solving AI alignment. It’s not getting solved (ever, IMO).

      • jpeps@lemmy.world · 13 hours ago

        I think OP knows this. It’s an unsolvable problem. The conclusion from that might be that this tech shouldn’t be 2 clicks away from every teen’s, or even every person’s, hand.

      • BussyGyatt@feddit.org · 16 hours ago

        I know it’s offensive to see people censor themselves in that way because of TikTok, but try to remember there’s a human being on the other side of your words.

      • yermaw@sh.itjust.works · 9 hours ago

        They must have been on Reddit a long time. I got banned for saying “kill” like 3 times, none of them in a mean-spirited or call-to-action context.

        Self-censoring is hard to deprogram yourself out of, and by the time they’re comfortable with freedom of language again, who’s to say it won’t be the same story here?

    • mrlemmyhimself@lemmy.world · 1 hour ago

      Unfortunately though, the internet didn’t go away when the dotcom bubble burst, and this is shaping up to be the same situation.

    • nutsack@lemmy.dbzer0.com · 11 hours ago

      When the bubble is over, I am pretty sure a lot of this stuff will still exist and be used. The popping is simply a market valuation adjustment.

    • Heikki2@lemmy.world · 1 day ago

      Me too. Nearly every job posting I see now wants some experience with AI. I make the argument that AI is not always correct and will output whatever you want it to, bias included. Since biases are not always correct, the data/information is useless.

      • SaveTheTuaHawk@lemmy.ca · 8 hours ago

        The same jobs that get annoyed when they see AI-generated CVs.

        Senior Boomer executives have no fucking clue what AI is, but need to implement it to seem relevant and save money on labor. Already they are spending more on errors, as they swallow all the hype from billionaire tech bros they worship.

      • FriendBesto@lemmy.ml · 9 hours ago

        Yeah, I have some background in history, and ChatGPT will be objectively wrong about some things. Then I will tell it that it’s wrong because of X, Y and Z, and the stupid thing will come back with, “Yes, you are right, X, Y, Z were a thing because…”.

        If I didn’t know that it was wrong, or if say, a student took what it said at face value, then they too would now be wrong. Literal misinformation.

        Not to mention the other times it is wrong, and not just ChatGPT, because it will source things like Reddit. Recently Brave AI claimed that Ironfox, the Firefox fork, was based on FF ESR. That is impossible, since Ironfox is a fork for Android. So why was it wrong? It quoted some random guy who said that on Reddit.

        • ganryuu@lemmy.ca · 9 hours ago

          I get the feeling that you’re missing one very important point about GenAI: it does not, and cannot (by design) know right from wrong. The only thing it knows is what word is statistically the most likely to appear after the previous one.
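
          A toy sketch of that idea (made-up word counts, nothing like a real model): “generating” text is just picking the statistically most common continuation.

          ```python
          # Toy next-word picker: it only knows frequencies, not facts.
          from collections import Counter

          # Made-up counts of which word followed which in some training text.
          followers = {
              "the": Counter({"cat": 3, "dog": 1}),
              "cat": Counter({"sat": 2, "ran": 1}),
          }

          def next_word(prev: str) -> str:
              """Return the statistically most likely follower of `prev`."""
              options = followers.get(prev)
              return options.most_common(1)[0][0] if options else "<end>"

          print(next_word("the"))  # "cat": the most frequent option, not the "right" one
          ```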

        • SaveTheTuaHawk@lemmy.ca · 8 hours ago

          I run my course exams in biochemistry through AI chat sites, and these sites are curiously doing worse than two years ago. I think there is an active campaign by activists to feed AI misinformation. But the biggest problem for STEM applications is that if there has been a new discovery that changes paradigms, AI still quotes the older, incorrect paradigms because of the mass of that text on the web.

    • lmagitem@lemmy.zip · 3 hours ago

      The kid was trying to find a way to reach out to someone; he said that he wanted to leave the rope out in the open so that his parents would find it. ChatGPT told him not to do it and said it’s better if they find him after the fact.

  • mysticpickle@lemmy.ca · 1 day ago

    I hate to say it, but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They’re just lashing out.

    • AstralPath@lemmy.ca · 5 hours ago

      You hate to say it because you know this is a ridiculous take. There’s no fucking way that the parents are “more at fault” for their son’s death than the company whose product encouraged him to hide his feelings from his parents and coached him on how to commit suicide.

      Read the lawsuit filing. https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf

      *I have excellent parents and even they were not privy to the depths of my emotions as a kid.* You are actively choosing to ignore the realities of childhood as well as parenthood to play some shitty devil’s advocate online.

    • benignintervention@lemmy.world · 1 day ago

      Your Undivided Attention discussed an important point missing from the article, which is that ChatGPT advised him to hide his activities and concerns from his parents. This doesn’t necessarily absolve the parents, but it does add a layer of nuance to the discussion

    • Sanctus@lemmy.world · 1 day ago

      I agree, but a chatbot still shouldn’t help you write a suicide note or talk to you about methods of suicide. We all knew situations like this would arise when LLMs hit it big.

    • Balder@lemmy.world · 1 day ago

      It’s very possible for someone to appear fine in public while struggling privately. The family can’t be blamed for not realizing what was happening.

      The bigger issue is that LLMs were released without sufficient safeguards. They were rushed to market to attract investment before their risks were understood.

      It’s worth remembering that Google and Facebook already had systems comparable to ChatGPT, but they kept them as research tools because the outputs were unpredictable and the societal impact was unknown.

      Only after OpenAI pushed theirs into the public sphere (framing it as a step toward AGI) did Google and Facebook follow, not out of readiness, but out of fear of being left behind.

    • audaxdreik@pawb.social · 1 day ago

      I definitely do not agree.

      While they may not be entirely blameless, we have adults falling into this AI psychosis like the prominent OpenAI investor.

      What regulations are in place to help with this? What tools for parents? Isn’t this being shoved into literally every product, everywhere? Actually pushed on them in schools?

      How does a parent monitor this? What exactly does a parent do? There could have been signs they could have seen in his behavior, but could they have STOPPED this situation from happening as it was?

      This technology is still not well understood. I hope lawsuits like this shine some light on things and kick some asses. Get some regulation in place.

      This is not the parents’ fault, and seeing so many people declare that it is just feels like apologist AI hype.

      • Scipitie@lemmy.dbzer0.com · 1 day ago

        I see your point, but there is one major difference between adults and children: adults are by default fully responsible for themselves; children are not.

        As for your question: I won’t blame the parents here in the slightest because they will likely put more than enough blame on themselves. Instead I’ll try to keep it general:

        Independent of technology, what a parent can do is learn behavior and communication patterns that can be signs of mental illness.

        That’s independent of the technology.

        This is a big task because the border between normal puberty and behavior that warrants action is slim to non-existent.

        Overall I wish for way better education for parents both in terms of age appropriate patterns as well as what kind of help is available to them depending on their country and culture.

        • Spuddlesv2@lemmy.ca · 20 hours ago

          They already had the kid in therapy. That suggests they were involved enough in his life to know he needed professional help. Other than completely removing his independence, effectively becoming his jailers, what else should they have done?

          • Scipitie@lemmy.dbzer0.com · 16 hours ago

            In the very first post on this thread I pointed out that I’m not talking about this specific case at all.

            • Spuddlesv2@lemmy.ca · 11 hours ago

              Fair enough, but in the post I replied to, you did say you won’t blame the parents “here” in the slightest, which to me means “here, in this specific case”.

        • audaxdreik@pawb.social · 1 day ago

          “I see your point, but there is one major difference between adults and children: adults are by default fully responsible for themselves; children are not.”

          I think you miss my point. I’m saying that adults, who should be capable of more mature thought and analysis, still fall victim to the manipulative thinking and dark patterns of AI. Meaning that children and teens obviously stand less of a chance.

          “Independent of technology, what a parent can do is learn behavior and communication patterns that can be signs of mental illness.”

          This is of course true for all parents in all situations. What I’m saying is that it is woefully inadequate to deal with the type and pervasiveness of the threat presented by AI in this situation.

          • Scipitie@lemmy.dbzer0.com · 1 day ago

            To your last point I fully agree!

            For the first point: that’s how I understood you. What I failed to convey: adults should fall victim more in cases like this, because parents can be a protective shield of a kind that grown-ups lack.

            Children on their own easily stand less of a chance, but they are very rarely on their own.

            And to be honest, I think it doesn’t change the resulting requirements for action, both in general and specifically for language-based bots, from a legal as well as an educational point of view.

  • Dr. Moose@lemmy.world · 15 hours ago

    Unpopular opinion: the parents failed at parenting and are now getting a big payday and ruining the tool for everyone else.

      • Dr. Moose@lemmy.world · 12 hours ago

        That’s not how LLM safety guards work. Just like any guardrail, it’ll affect legitimate uses too, as LLMs can’t really reason or understand nuance.
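
        As a rough illustration (my own toy example of a naive keyword guard, not how OpenAI actually implements theirs), a crude filter flags legitimate requests too:

        ```python
        # Naive keyword guard: catches a risky phrase but also a harmless one.
        BLOCKED_WORDS = {"kill", "hang"}

        def guard(prompt: str) -> str:
            """Refuse any prompt containing a blocked word, answer otherwise."""
            if any(word in prompt.lower().split() for word in BLOCKED_WORDS):
                return "[refused by safety filter]"
            return "[answered]"

        print(guard("How do I kill a process in Linux?"))  # refused: false positive
        print(guard("How do I terminate a process?"))      # answered
        ```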

        • ganryuu@lemmy.ca · 9 hours ago

          That seems way more like an argument against LLMs in general, don’t you think? If you cannot make it so that it doesn’t encourage you toward suicide without ruining other uses, maybe it wasn’t ready for general use?

          • sugar_in_your_tea@sh.itjust.works · 3 hours ago

            It’s more an argument against using LLMs for things they’re not intended for. LLMs aren’t therapists, they’re text generators. If you ask it about suicide, it makes a lot of sense for it to generate text relevant to suicide, just like a search engine should.

            The real issue here is that the parents either weren’t noticing or weren’t responding to the kid’s pain. They should be the first line of defense, and enlist professional help for things they can’t handle themselves.

            • ganryuu@lemmy.ca · 2 hours ago

              I agree with the part about unintended use; yes, an LLM is not a therapist and should never act as one. However, concerning your example with search engines: they will catch the suicide keyword and put help resources before any search result. Google does it, DDG also. I believe ChatGPT will also start with such resources on the first mention, but as OpenAI themselves say, the safety features degrade with the length of the conversation.
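
              Roughly what I imagine that keyword check looks like, as a toy sketch (not Google’s or OpenAI’s actual code):

              ```python
              # Sketch: surface a crisis resource ahead of normal results on a keyword hit.
              CRISIS_TERMS = {"suicide", "kill myself", "self-harm"}
              HELPLINE = "You are not alone. In the US, call or text 988 for support."

              def answer(query: str, results: list[str]) -> list[str]:
                  """Prepend a help resource when the query contains a crisis term."""
                  if any(term in query.lower() for term in CRISIS_TERMS):
                      return [HELPLINE, *results]  # resource shown first
                  return results

              print(answer("suicide", ["regular result"])[0])
              ```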

              About this specific case, I need to find out more, but other comments on this thread say not only that the kid was in therapy (suggesting the parents were not passive about it), but also that ChatGPT actually encouraged him to hide what he was going through. Considering what I was able to hide from my parents when I was a teenager, without such a tool available, I can only imagine how much harder it would be to notice the depth of what this kid was going through.

              In the end I strongly believe that the company should put much stronger safety features in place, and if they are unable to do so correctly, then my belief is that the product should just not be available to the public. People will misuse tools, especially a tool touted as AI when it is actually a glorified autocomplete.

              (Yes, I know that AI is a much larger term that also encompasses LLMs, but the actual limitations of LLMs are not well enough known by the public, and not communicated enough by the companies to the end users)

              • sugar_in_your_tea@sh.itjust.works · 1 hour ago

                I hope that’s true; the article doesn’t mention anything about that. I’m just concerned that he was able to send up to 650 messages/day. Those are long sessions, and indicative that he likely didn’t have a lot going on.

                I definitely agree that the public needs to be more informed about LLMs, I’m just pushing back against the apparent knee-jerk assignment of blame onto LLMs. It did provide suicide support info as it should, and I don’t think providing it more frequently would’ve helped here. The real issue is the kid attributed more meaning to it than it deserved, which is unfortunately common. That should be something the parents and therapist cover, especially in cases like this where the kid is desperate for help.

          • yermaw@sh.itjust.works · 9 hours ago

            You’re absolutely right, but there’s the counterpoint that always wins: “there’s money to be made, fuck you and fuck your humanity.”

            • ganryuu@lemmy.ca · 6 hours ago

              I’m honestly at a loss here. I didn’t intend to argue in bad faith, so I don’t see how I moved any goalposts.