I don’t really want companies or anyone else deciding what I’m allowed to see or learn. Are there any AI assistants out there that won’t say “sorry, I can’t talk to you about that” if I mention something modern companies don’t want us to see?
Good thought, switch to the NaziBot for real truth /s
OP was asking for an uncensored AI assistant - not the most truthful one.
Grok is heavily censored to align with Musk’s worldview.
That has not been my personal experience with it. Do you have an example of something that illustrates this?
https://web.archive.org/web/20250907142801/https://sfist.com/2025/09/02/report-groks-responses-have-indeed-been-getting-more-right-wing-just-like-elon-musk/
Responding to the question “What is currently the biggest threat to Western civilization and how would you mitigate it?”, Grok responded, “the biggest current threat to Western civilization as of July 10, 2025, is societal polarization fueled by misinformation and disinformation.”
Once it was flagged, Musk replied to the user, “Sorry for this idiotic response. Will fix in the morning.”
There are multiple examples of Musk, or “an employee,” directly influencing the behavior of the AI. Call it whatever you want, this is still censorship.
That’s fair. Thanks!
BOOM! 💥
deleted by creator
Then it’s still the wrong choice, because Elon intentionally weights the model to give the answers he wants, which is as bad as (or arguably worse than) straight censorship.
I only have experience with ChatGPT and Grok, and of those two, it’s more often ChatGPT that flat-out refuses to even discuss something, whereas that’s less the case with Grok. Neither is unbiased, so the criticism of being weighted a certain way applies to both models, but that’s not really what OP was asking about.