After Anthropic flat-out refused to allow Claude to be used for autonomous weapons and mass surveillance of American citizens, OpenAI jumped right into bed with the United States Department of War.
Amodei said in an interview that the DoW altered the contract to appear to compromise, so that it looked like they were agreeing to those use limits, but the legalese accompanying the changes rendered the new text pointless. In effect: “We won’t use Claude for mass domestic surveillance and fully automated killing, unless we really want to.” My guess is that OpenAI signed the exact same contract and just pretended not to notice how toothless the guardrails were.
“Guardrail” and “toothless” are practically synonyms at this point, given the pile of evidence that these multi-billion-dollar tech companies have been helping people kill themselves and hiding the evidence.
Or, even more ironically, maybe they used ChatGPT to analyze the changes and it missed them. That would tickle me to some extent, but it would also cement the terror of such a system being used to make life-altering decisions.