• I_Has_A_Hat@lemmy.world · 7 hours ago

    There is a lot to hate about AI. A lot of dangers and valid criticism. But AI chatbots convincing people to kill themselves isn’t a problem with chatbots, it’s a problem with the user.

    I get it, grieving families will look for anything and anyone to blame for a suicide except the victim, but ultimately, it is the victim who chose to kill themselves. If someone can be convinced to kill themselves by something as stupid as an AI chatbot, they really weren’t that far from the edge to begin with.

    • Bassman27@lemmy.world · 7 hours ago

      So someone who already has an underlying mental health condition, diagnosed or not, is at fault for their own death even if they were coerced into it?

      • XLE@piefed.social · 7 hours ago

        Google, of all companies, probably has a better psychological profile of their users than the average doctor. They even offer a public-facing option to disable ads about gambling, alcohol, or pregnancy.

          • XLE@piefed.social · 4 hours ago

            People who don’t want their family getting suspicious, perhaps. The Target Incident comes to mind.

            Of course, disabling these options doesn’t mean Google stops knowing about users’ mental or physical issues. I’m sure you know the best way to prevent that is to just avoid Google altogether. This is probably just Google’s way of looking less creepy to the average person.

      • SalamenceFury@piefed.social · 6 hours ago

        Here’s the thing: it’s usually normies with no history of mental illness who fall for this kind of stuff. Most of my friends and the people I follow on social media who are neurodivergent did experiment with chatbots, saw a fuckton of red flags in the way they work, and warned everyone about it, if they didn’t already hate them for essentially stealing artistic output (in my case it was both).

      • I_Has_A_Hat@lemmy.world · 4 hours ago

        In 1980, John Lennon was shot by a mentally ill man who was convinced to kill Lennon by reading The Catcher in the Rye. If he had never read The Catcher in the Rye, he most likely wouldn’t have killed John Lennon.

        But it is not the fault of The Catcher in the Rye. We don’t ban the book, or call the author irresponsible for writing it, because we recognize that the fault lies in the shooter’s mental illness, and that anything could have set him off.

        The people who kill themselves because an AI chatbot told them to are mentally ill. It is their mental illness that killed them, not the chatbot. You can claim that if it weren’t for the chatbot, they wouldn’t have gone through with it, but again, you can say the same about The Catcher in the Rye. Getting rid of the trigger does not remove the mental illness.

      • iegod@lemmy.zip · 6 hours ago

        It’s not the car manufacturer’s responsibility to guarantee a drunk driver doesn’t plow into others.

        Vulnerable people don’t get to outsource responsibility.

        • Bassman27@lemmy.world · 5 hours ago

          Here’s the thing: there are no safeguards on who can and cannot use AI. There are safeguards to prevent deaths from drunk driving.

          Drunk driving is illegal. It still happens, but it’s against the law, and that acts as a deterrent to stop people from driving while intoxicated. I guarantee that if drunk driving were legal, there would be exponentially more deaths.

          AI is being shoved down everyone’s throats on a day-to-day basis. There are no safeguards; even kids can use it.

          Vulnerable people are victims of big tech’s pursuit of profit.

          Your argument is poor.

    • JollyG@lemmy.world · 6 hours ago

      > There is a lot to hate about AI. A lot of dangers and valid criticism. But AI chatbots convincing people to kill themselves isn’t a problem with chatbots, it’s a problem with the user.

      To me this seems like an obvious problem with the chatbots. These things are marketed as “PhD-level experts,” so advanced that they are about to change the nature of work as we know it.

      I don’t think the companies or their supporters can make these claims, then turn around and say “well obviously you shouldn’t take its output seriously” when a delusional person is tricked by one into doing something bad.

      • newtraditionalists@kbin.melroy.org · 6 hours ago

        This is the key to me. Google and all the other AI companies are knowingly engaging in marketing campaigns built on lies. They should be held accountable for that regardless of anything else.

    • [deleted]@piefed.world · 6 hours ago

      When people encourage others to murder by feeding their delusions, they can be held accountable.

      Why are you blaming the person with mental health issues and not even considering holding accountable the for-profit company that made a machine that encourages their delusions?