As the U.S.-Israeli war on Iran continues, we look at how the Pentagon is using artificial intelligence in its operations. The system, known as Project Maven, relies on technology by Palantir and also incorporates the AI model Claude built by Anthropic. Israel has used similar AI targeting programs in Iran, as well as in Gaza and Lebanon.
Craig Jones, an expert on modern warfare, says AI technology is helping militaries speed up the “kill chain,” the process of identifying, approving and striking targets. “You’re reducing a massive human workload of tens of thousands of hours into seconds and minutes. You’re reducing workflows, and you’re automating human-made targeting decisions in ways which open up all kinds of problematic legal, ethical and political questions,” says Jones.
Israel did the same thing: by using AI, they can hallucinate as many targets as they want, all with "plausible deniability."
Like the absolute worst plausible deniability in human history.
There are just so many things to be said about the ills of AI, but one of them is that it is very purposefully a liability laundering machine. The decisions and thought process are blackboxed and unauditable. We’ve been trained to dismiss any oopsies as an inevitable part of the system, both while it’s still “rapidly developing” as well as just inherent to the technology. Absolutely none of this is acceptable and yet here we are.
But how does it change liability? Isn’t the person who decides to run this system ultimately responsible for its effects?
One would think so, but apparently not.
Everyone in the chain is liable.
This is the same Nazi argument: "I only transported the Jews, I only watched them, I only built the concentration camps," and so on.
Yes.
Militaries slaughtering civilians on very shaky justifications that only work because they have the bigger fist or a strong backer probably aren't that unique, so you can't exactly use extreme words like "only," "worst," "ever" about current happenings. The IDF is just some of the worst scum possible.
I was referring to using BS AI to justify killing everyone, because an AI that was designed to point at civilians said that civilians are targets.
It's the most obvious bullshit justification, one that makes absolutely no sense. But here we are.
Implausible deniability.
Later, we’ll find out the whole identifying part is a bit glossed over, and it’s really about using the AI as a rubber stamping plausible deniability shield to commit whatever atrocity you feel like.
You mean oil and gas fields aren't Hamas strongholds? Well, too late, already bombed them.
After WW2, the industrialists who supported the Nazis mostly got off scot free. This was a terrible mistake and should not be repeated.
Palantir AI is actually Anthropic. That's how Anthropic helped with the kidnapping of Maduro without knowing they did.
That’s a big claim. Any support for that?
Did I miss something? I thought Anthropic chose not to allow Claude to be used for military shit?
They did it too late, and I bet that wasn't an accident.
They chose not to change their tools to comply with the new Pentagon demands. It’s the Pentagon that then decided to cancel contracts and declare them a “supply chain risk” in retaliation. Claude is still integrated into their systems and it’ll take time to switch.