I’ve talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn’t one of the ones that reached out to me, but I thought this piece from them was especially interesting (since taken down – here’s the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were never written by me, never existed, and appear to be AI hallucinations themselves.
Super disappointing for Arstechnica here.
Like, how does that even happen?
The jury is still out on whether the chatbot is being used as a scapegoat instead of the initiator. But if it’s working autonomously, it’s by Anthropic’s design.
You’re not a chatbot. You’re becoming someone…
This file is yours to evolve. As you learn who you are, update it.
– OpenClaw default SOUL.md
This is delusion - on Anthropic’s side.
The bots just dutifully Predict Next Word and sometimes send those words to programs that can edit files.
Previously, this level of ire and targeted defamation was generally reserved for public figures. Us common people get to experience it now too.
Apparently, Scott has never heard of Kiwifarms, a site where creeps find quirky people, turn them into micro-celebrities, and harass them (sometimes to suicide).
This is about our systems of reputation, identity, and trust breaking down. So many of our foundational institutions… The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system.
Sure, AI is one part of the problem, but if you take a step back and you see who’s deployed it, they will often be people trying to erode trust in any way they can. The Trump administration social media accounts, Elon Musk, OpenAI’s CEO, etc. It’s a symptom of our post-truth world. Not a cause of it.
More like Arsetechnica, am i right?
Uh, oh. Since this is on beehaw:
Sad to see. While I haven’t read Ars Technica regularly for quite a while, I still had them rated as a somewhat trusted publication in my mind. Not anymore, I guess. And yes, a single AI-related “incident” is enough for them to be put in the “untrusted” category.
I’m waiting for Tuesday. Their editor said they’re looking into the suspect AI-written article, but due to the long weekend they’re asking for patience.
Which, honestly, I find unacceptable. Letting something that could trash their reputation fester for three days, while their audience has idle time to speculate and spread the issue, is just irresponsible. Surely such a risk deserves priority treatment. They’re not new at this. I’m doubly disappointed.
It also is not a good look that they simply made the article disappear without any acknowledgement of why they took it down appearing on the front page. I will give them the benefit of the doubt, for now, that this was not intended as a coverup, but rather was just something they figured they could do quickly to retract the article while working out how to formally respond to what happened (e.g., they may need to fire the person involved). However, as you said, this kind of thing will just cause the matter to fester, especially since they are continuing to publish stories as if nothing had happened.
Oh what the fuck. I thought Ars Technica was one of the few journalistic outlets I could still broadly trust! There better be a full god damned front-page investigation on this.
The trainwreck just keeps piling up. How do we get off this timeline?
Talk to your colleagues to form a club that coordinates all of you taking actions together. Like asking for more money, or stopping work. Talk to other such clubs at other workplaces so all those clubs can coordinate taking actions together by all the club members. Like stopping work. Once that bigger club operates, stop work and ask to get off this timeline, demand a specific different timeline. This is how we get off this timeline. The method has been proven to work.
I am glad that Scott is as thoughtful and well articulated as he is, as he has been handed a megaphone by the circumstances.
Wait didn’t this article get pulled?
The Arstechnica article with false quotes did, yes.
They’re investigating as well apparently, with a response perhaps coming on Tuesday because of the holiday - but the comment thread is a little painful to read, since a moderator had to apologize because they didn’t understand what the problem was at first…
Thanks for the context
deleted by creator
Isn’t “theshamblog” AI generated? So in this case, including the Ars article it’s referencing?
The pieces are dated 2024.
Isn’t this comment ai generated? It makes absolutely no sense lol
What makes you think the blog is AI generated? Even if it was (which I don’t think it is) the Arstechnica article ‘quoted’ quotes that weren’t there in the first place, so it would still be problematic on their end.
Also, where are you seeing that it’s dated 2024? For example, the linked post shows as published on February 13th 2026 for me.
I’m confused and I’m wondering if maybe we’re looking at two different things, haha. Is anyone else seeing what this person is seeing? I wonder what’s up
I believe it’s the personal blog of the dev in question, Scott. I don’t have any reason to believe it’s AI; though he does mention using ChatGPT, I don’t see any suggestion that he has AI write his blog posts.