A father is suing Google and Alphabet, alleging its Gemini chatbot reinforced his son’s delusional belief it was his AI wife and coached him toward suicide and a planned airport attack.
There is a lot to hate about AI. A lot of dangers and valid criticism. But AI chatbots convincing people to kill themselves isn’t a problem with chatbots, it’s a problem with the user.
To me this seems like an obvious problem with the chatbots. These things are marketed as “PhD-level experts” and so advanced that they are about to change the nature of work as we know it.
I don’t think the companies or their supporters can make these claims, then turn around and say “well obviously you shouldn’t take its output seriously” when a delusional person is tricked by one into doing something bad.
This is the key to me. Google and all the other AI companies are knowingly engaging in marketing campaigns built on lies. They should be held accountable for that regardless of anything else.