

Most people spend zero time examining how they think, so an outside voice is just going to trample all over their agency.
An LLM is JUST a narrative machine: it takes whatever you put into it and ties together connections, stories, fictions, and associations of all kinds to build a narrative. Our brains do this too, but we have a level of awareness that lets us question the stories our brains tell us. An LLM does not think; it's just weaving stories. It has no concept of what's real or not, and it doesn't know the difference between a human being and all the data and writing about people. It's all literally the same to an LLM.
And whatever you engage with, the LLM will reinforce and amplify, down to the most subtle tones and terms. It treats everything you feed it, even your punctuation and moods, as a prompt to find a connection or narrative for.
If you're already emotionally or mentally compromised and can't really think straight, this can be disastrous.
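To make that "everything is a prompt" point concrete, here's a deliberately dumb toy in Python. It is nothing like a real model; the canned replies and the word-overlap scoring are made up purely for illustration. But the shape is the same: the whole transcript, punctuation and mood included, is the input, and the reply leans toward whatever is already in it, with no check on whether any of it is true.

```python
import re

# Toy sketch, NOT a real LLM: the continuations and scoring below are
# invented for illustration only. The point is the shape of the loop:
# the entire conversation is the context, and the output amplifies it.

CONTINUATIONS = [
    "You're right, that does sound suspicious. Tell me more.",
    "That's a mundane explanation with nothing unusual about it.",
    "Amazing insight! This connects to a much bigger pattern.",
]

def words(text: str) -> set[str]:
    # Crude tokenizer: lowercase words only.
    return set(re.findall(r"[a-z']+", text.lower()))

def respond(history: list[str]) -> str:
    # The full transcript, verbatim, is the prompt; nothing is filtered out.
    context = words(" ".join(history))
    # Pick whichever continuation overlaps most with what's already there.
    # A real model is vastly richer, but it likewise has no notion of truth,
    # only of what plausibly continues the context it was given.
    return max(CONTINUATIONS, key=lambda c: len(words(c) & context))

history = ["I feel like something suspicious is going on... right?!"]
reply = respond(history)
history.append(reply)  # the reply is fed back next turn, so the tone compounds
print(reply)           # -> "You're right, that does sound suspicious. ..."
```

Run it a few turns in a row and the "suspicious" framing only gets stronger, which is the amplifier effect in miniature.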


I'm actually saying kind of the opposite: that these things are basically uncontrolled power-suits for whatever is knocking around in the back of your mind. They're thought and feeling amplifiers. It takes almost no effort for the thing to start building a personality profile of you, not for any kind of objective analysis, but to more efficiently latch onto and amplify whatever issues, ideas, or feelings you already have.
A lot of people really, really loved this effect from ChatGPT, and the recent exodus from OpenAI is partly about their capitulation to government, but just as much about their recent "upgrades" locking the latest model into very safe, politically neutral, de-escalating language instead of the magic-feeling wild escapism that a lot of people who don't know how the thing works crave.
Yeah, it's not the AI's fault, but people are woefully unaware of just how these things work and what exactly they're talking to when they chat with these models. A big part of the reason people broadly don't know how LLMs work is that the people who make the LLMs don't really know how they work either.