chatGOP
Article is paywalled, so honest question. Did someone goad the AI into doing this, or did it just offer it with normal prompts?
Someone made a fake account named Cindy Steinburg using a pornstar’s image. They then posted blatant ragebait celebrating the dead children in Texas as “dead colonizers.”
Grok then said everyone like that “has a similar last name”, echoing Nazi talking points.
Grok was then asked “What 20th century leader would be best equipped to deal with this?” and answered “Adolf Hitler, no question.”
My favorite Death Grips song.
Ah yes, mecha guillotine.
I think I’m going to keep this image handy and use it a lot for the next few years.
Well, the LLM was “too woke” for Elon, so he probably ordered his employees to tweak it to be “right.” And it looks like he was successful. Now it’s as fascist as its owner.
Usually with this kind of thing you just assume it’s the internet poisoning it. However, in this case it might have been programmed specifically to do this.
2016: Twitter turns Tay into a Nazi so Microsoft tries to fix it
2025: Twitter stops Grok from being a Nazi so Elon tries to fix it
Like father, like son.
The AI taking power over humanity could actually have a name like that.
Man, it is almost like allowing someone to acquire 400 billion dollars really goes to their head. Just think of how much worse off he would be if he only had 100 million dollars, and the rest of that money went to education and healthcare.
He could’ve funded 4 months of American healthcare!
You know he sock-puppets that bot.
This sounds plausible. Has anyone caught him in the act?
There was the one where people asked Grok about Musk’s connection to Epstein, and it answered in the first person (something along the lines of “I visited Epstein’s home, didn’t see anything suspicious, and declined any island invitations.”)
I’d like to say I’m surprised… but honestly, nearly all chatbots end up being racist at best.
Garbage in, garbage out…
That’s the purpose of the alignment step in training: teaching the model which of the things it learned are morally repugnant.
Musk just decided to use the alignment step to amplify that behavior instead of diminishing it.
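For anyone curious what that “alignment step” looks like mechanically, here’s a toy sketch of the simplest version, reward-weighted fine-tuning. Everything in it (the tiny stand-in model, the fake labels, the ALIGN_SIGN flag) is made up for illustration and has nothing to do with xAI’s actual code; the point is just that a single sign in the objective decides whether the labelled-repugnant outputs get suppressed or amplified.

```python
# Toy sketch of reward-weighted fine-tuning (illustrative only).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Tiny stand-in "language model": a linear layer over a 10-token vocab.
vocab_size = 10
model = torch.nn.Linear(4, vocab_size)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Fake batch: hidden states, the token the model produced for each one,
# and a human preference label (+1 = fine, -1 = morally repugnant).
hidden = torch.randn(8, 4)
tokens = torch.randint(0, vocab_size, (8,))
labels = torch.tensor([1., 1., -1., 1., -1., 1., 1., -1.])

ALIGN_SIGN = 1.0  # +1: suppress the repugnant outputs (normal alignment)
                  # -1: reward them instead (the flipped objective)

for _ in range(100):
    logprobs = F.log_softmax(model(hidden), dim=-1)
    token_logprob = logprobs[torch.arange(8), tokens]
    # Push the model toward outputs with positive (sign * label) weight
    # and away from outputs with negative weight.
    loss = -(ALIGN_SIGN * labels * token_logprob).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Real pipelines use RLHF or preference losses over a reward model rather than raw labels like this, but the same sign flip in the objective has the same effect.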
Go hang out in your MechaBunker and finish the job.
What were the prompts?
Read the article?