Ok so given the guy tripped and fell and died (stupid headline),
there's still the risk of the lure, which is plenty dangerous: vulnerable people can get drawn out and exposed like this nonetheless.
Didn’t he slip and fall on his way to the train or something?
It’s a shitty headline, but he was still there to slip because he got catfished by AI.
Let’s not talk about accusing Meta of murder for a minute, but can we at least agree that a “flirty chatbot modeled on Kendall Jenner” that insists you should meet in real life and gives you a real address does not sound like a great idea?
deleted by creator
Bue hurried through the dark with a roller bag to catch a train to her apartment. He tripped near a Rutgers University parking lot, suffering head and neck injuries. He died three days later, on March 28.
Click-bait crazy headline.
Yeah, the headline does make it sound like someone used it to lure him out and murder him.
Cashing in on anti-AI hysteria for clicks.
I guess you can say that, but you can also make the argument that he was only traveling in the dark, in a rush, because he was being invited over by this bot pretending to be a real person. An elderly person yearning for companionship may get overexcited at an opportunity like this.
I’m going to hazard a guess that this was more because US infrastructure is unsafe for people with mobility issues. Because pedestrians are second-class citizens compared to motorists.
People should be safe walking around where they live. If they aren’t, something is wrong. And that something is not the desire to walk places, even if it was caused by a weird scambot.
Is that the daughter laughing in the thumbnail?
The AI didn’t make him trip…
But it did convince a cognitively impaired 76-year-old that it was a good idea to go romping about in the dark.
Ok. So could a normal person on the internet or with a phone or stamps.
And gave a live address??
He could have tripped in his bathroom where a lot of people die, not just the elderly. Such things happen no matter how much you’re trying to make environments safe. I’m not trying to defend AI here (I would be the last one to do so).
This is so stupid, you can say this about literally any death ever. The bot directly led to the situation that killed him in this instance.
Also, no reasonable judge would hold Meta liable.
This is a textbook example of a proximate cause argument.
Maybe not criminally but maybe in civil court
Correlation does not imply causation.
Removed by mod
Facepalm
That’s the correct response to everything you post, yeah.
Yeah this is a dumb take. If I lured an elderly person down a dark shaft with the promise of something and then he got lost / died / tripped in the dark and couldn’t get help I would be charged with at least endangerment.
Good luck with trying to create a world that is completely safe because it’s not possible.
The old man had cognitive decline and a robot told him to leave his house multiple times… The “world can’t be bubble-wrapped” argument shouldn’t be used here; there are better places for it.
Yeah this is a dumb take. If I lured an elderly person down a dark shaft with the promise of something and then he got lost / died / tripped in the dark and couldn’t get help I would be charged with at least endangerment.
Except that’s not what happened here. To use your hypothetical: you would have convinced the person to go to the dark shaft, but on the way to you he tripped on the stairs at a regularly used and maintained subway platform and died. You would NOT be charged with endangerment; he hadn’t gotten to the dangerous place yet, where you were creating the dangerous conditions.
This is an okay counter. I would still make the argument that he wouldn’t have left the house under normal circumstances, and thus Meta should be liable to some degree.
So if your friends talk you into coming to their place and you trip coming out of your house, your friend should be charged?
Does your opinion change if it was a real person instead of an AI?
Listen guys, I’m not limit-testing my stance on 100 different hypotheticals. I agree there are cases where my example doesn’t apply, and there are some situations you could present where I would change my opinion.
The fact is an elderly man with cognitive issues was lured out of his home to meet an AI that should not be presenting itself as “real” or giving out a real address to travel to for a meetup. I posit that this old man would have been resting at home if Meta’s AI wasn’t continuously asking him to come over. The article states the old man didn’t initiate the intimate talk at all (the AI did) and never asked to meet; that was also the AI’s doing.
Even if he hadn’t died on the way, what would have happened if he showed up at that address? Who lives there? What time was it; is he knocking on some random door in the dark?
If this dude had dementia, I’d be as pissed as the family is.
Let’s test your logic some more:
If a person got in their car to drive to their drug dealer to buy drugs (a crime in most places), got in a car accident with an unrelated driver, and then died, wouldn’t your logic say the drug dealer should be charged with some culpability in the driver’s death?
Do you believe that is the law currently? Do you believe the drug dealer should be charged with a portion of the responsibility for the death because the driver wouldn’t have left the house this time unless he wanted to buy drugs?
No, of course not, but that’s not perfectly analogous because the person purchasing drugs initiated it and went of their own accord… This is an elderly man with cognitive decline… Idk about you, but I’m picturing a person with early dementia being led out of the house by Meta’s robot…
I think assuming all of the subway is well maintained is your flawed assumption. I’ve had more than a few trip and fall cases that were actually the city’s liability.
In that situation, the liability would be on the city, NOT on the person who initiated the meeting and chose where to set it up, which is what @Sanguine@lemmy.dbzer0.com is proposing.
trip and fall cases that were actually the city’s liability
I didn’t think I’d need to emphasize that, but here we are. Infrastructure doesn’t get properly maintained until someone gets hurt on it. It’s statesia. If you see a broken curb, take a photo (and let public works know), because if it’s still broken after 6 months, all the injuries there are the city’s fault (your state’s laws may vary). I have this curb I’ve been taking photos of and bothering one town over’s public works about for five years now.
I’m… not sure why you’re chasing down this rabbit hole of whose liability it would be when we both agree, for the context of our conversation, that the person NOT liable would be the hypothetical villain trying to lure the person into a trap. Whose liability it would be, outside of that villain, is immaterial to the discussion.
How can you be sure?
We aren’t at the stage where AI-controlled robots run around and make people die by “accident”… yet.
We are, however, at the stage where law enforcement has used a remotely controlled robot to kill people. Almost a decade ago, even!
Autonomous weapons are already in development and have likely been used on the battlefield, and now there are discussions about banning their use entirely, so we might be pretty dang close to that future already.
We will get there for sure. At worst in ten, at best in thirty years. People will get used to neighbors and coworkers getting neutralized fast. Humans have always been good at adapting.
Woosh!
Happens to the best of us
Removed by mod
Wondering if the two Reddit-tier dipshits
Says the person bringing Reddit-level negativity into Lemmy with a 1-day-old account.
Removed by mod