After Anthropic flat-out refused to apply Claude AI to autonomous weapons and mass surveillance of American citizens, OpenAI jumps right into bed with the United States Department of War.
Anthropic being scum, accepting money from foreign dictators, forcing their software on minorities while insisting it was human just like them, praising the Trump administration, making up scary stories to get more funding…
…In many ways, they’re worse than OpenAI. They’re just running the same playbook Sam Altman once used to pretend he was a good guy.
They insisted Claude was human?
Sorry, not quite, but close. From 404 Media:
When users confronted Clinton with their concerns, he brushed them off, said he would not submit to mob rule, and explained that AIs have emotions and that tech firms were working to create a new form of sentience, according to Discord logs and conversations with members of the group.
Oh, that guy! To be fair, that’s one employee, not Anthropic’s actions or position. You mentioned forcing their software on minorities while insisting it was better than it was, and I was getting OLPC flashbacks. But Anthropic looking for funding in the UAE and Qatar is shitty. I can’t seem to find anything about whether they went through with those contracts.
Jason Clinton is Anthropic’s Deputy Chief Information Security Officer. That means Jason knew better, and he was using his position as a moderator (and supposedly a security expert) to gaslight a vulnerable minority into believing his favorite toy was “secure” when it was not.
I mean, I’m not gonna defend him. But fucking up a Discord that you’re a mod of isn’t really in the same ballpark as taking money from dictators or directing fully autonomous strikes. Also, from the read, it really sounds like that Deputy CISO was a prime example of cyber-psychosis, or AI mania, or whatever we’ve decided to call it. And I assume he is part of the same vulnerable minority?
Every example we have of Anthropic’s behavior paints a picture of an immoral company that pretends to be moral. It’s bad enough that they continue doing harm, but then they dress it up with phrases like “AI Safety” and “Information Security”. (And every release describing how scary their system is tends to be followed by a sudden cash infusion from an openly morally bankrupt company like Google or Amazon.)
I reserve zero empathy for the people on the abuser side of an abusive dynamic. Maybe Elon Musk is autistic too. I don’t really care. Only Moloch knows their hearts. I’ll judge them for their actions.
Anthropic still is scum for being completely fine helping America oppress the rest of the world.