cross-posted from: https://lemmy.ml/post/35349105

Aug. 26, 2025, 7:40 AM EDT
By Angela Yang, Laura Jarrett and Fallon Gallagher

[this is a truly scary incident, which shows the incredible dangers of AI without guardrails.]

  • daniskarma@lemmy.dbzer0.com · 1 day ago

    I don’t have any evidence. But from everything I read about this story, I get the vibe that the parents are dodging their responsibility. Some of the logs clearly show that the kid had issues with the family.

    And the fact that they are willing to blame some software instead of themselves speaks quite loudly.

    Suicide in kids usually has two real roots: school or family. Those are the two places a kid spends the most time, and a kid will probably only run to “other places” if one or both of those fundamental places are awful.

    It’s “videogames are to blame for violence” all over again.

    I know too well how parents behave when they don’t want to assume their own fucking responsibility for how they raise their kids, and this smells too much like that very same shit.

    • Peter Link@lemmy.ml (OP) · 16 hours ago

      I don’t understand your logic here. Clearly, the kid had problems that were not caused by ChatGPT, and his suicidal thoughts were not started by ChatGPT. But OpenAI acknowledged that the longer an engagement continues, the more likely ChatGPT is to go off the rails, which is what happened here. At first, ChatGPT was giving the standard correct advice about suicide hotlines, etc. Then it started getting darker: it was telling the kid not to let his mother know how he was feeling. Then it progressed to actual suicide coaching. So I don’t think the analogy to videogames holds here.

      • daniskarma@lemmy.dbzer0.com · 7 hours ago (edited)

        Take away ChatGPT and insert a videogame, movie, or book that talks about those same topics.

        There are books that talk about suicide in much darker terms. If the kid had read those, would the parents sue the author of the book?

        There is a whole subgenre of music that is about encouraging people to commit suicide and fall into depression. Do we use the “who is going to think of the children” card with that music and its authors? Because music can really get under your skin, and a couple of hours listening to that would make anyone have weird thoughts.

        The shitty parents blame ChatGPT because it told the kid how to make a noose. You can find that info in a “howto” with instructable images. Do we put the UK nanny-dictatorship controls on “howto” sites? Or does it only count if it’s something that benefits the Butlerian jihad?

        I think it’s completely irrational to blame a piece of software (or media), however defective it is, for a suicide.