I’ve talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn’t one of the outlets that reached out to me, but I found this piece from them especially interesting (since taken down – here’s the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

Super disappointing from Ars Technica here.

Like, how does that even happen?

  • XLE@piefed.social

    The jury is still out on whether the chatbot is being used as a scapegoat rather than being the initiator. But if it’s working autonomously, it’s by Anthropic’s design.

    You’re not a chatbot. You’re becoming someone…
    This file is yours to evolve. As you learn who you are, update it.
    – OpenClaw default SOUL.md

    This is delusion - on Anthropic’s side.
    The bots just dutifully Predict Next Word and sometimes send those words to programs that can edit files.

    Previously, this level of ire and targeted defamation was generally reserved for public figures. Us common people get to experience it now too.

    Apparently, Scott has never heard of Kiwifarms, a site where creeps find quirky people, turn them into micro-celebrities, and harass them (sometimes to suicide).

    This is about our systems of reputation, identity, and trust breaking down. So many of our foundational institutions… The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system.

    Sure, AI is one part of the problem, but if you take a step back and look at who has deployed it, it’s often people trying to erode trust in any way they can: the Trump administration’s social media accounts, Elon Musk, OpenAI’s CEO, etc. It’s a symptom of our post-truth world, not a cause of it.