I’ve talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn’t one of the ones that reached out to me, but I thought this piece from them was especially interesting (since taken down – here’s the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.
Super disappointing for Ars Technica here.
Like, how does that even happen?


The jury is still out on whether the chatbot is being used as a scapegoat instead of the initiator. But if it’s working autonomously, it’s by Anthropic’s design.
This is delusion on Anthropic’s side.
The bots just dutifully Predict Next Word and sometimes send those words to programs that can edit files.
Apparently, Scott has never heard of Kiwifarms, a site where creeps find quirky people, turn them into micro-celebrities, and harass them (sometimes to suicide).
Sure, AI is one part of the problem, but if you take a step back and look at who’s deployed it, it’s often people trying to erode trust in any way they can: the Trump administration’s social media accounts, Elon Musk, OpenAI’s CEO, etc. It’s a symptom of our post-truth world, not a cause of it.