Folk are getting dangerously attached to AI that always tells them they’re right
I don’t know. When it tells me it can’t show me the picture I asked for because of copyright guardrails, I just get kind of frustrated.
The crazy thing is that the technology isn’t naturally sycophantic on its own. It can generate any kind of text at all; it doesn’t have to generate fawningly sycophantic text.
Where that comes from is the ‘hidden prompt’ every major AI company puts into their AI. In addition to the prompt you send, the interface also sends it other prompts that you don’t see, telling it things like ‘be polite, agreeable, and helpful’, ‘avoid profanity’, ‘respond like a knowledgeable expert’, and ‘refuse to generate anything copyrighted, sexually explicit, or violent’, etc, etc, etc. And these hidden prompts define much of the AI’s behavior and “personality”. To some degree, this is necessary for it to be an even vaguely useful tool, and these hidden prompts go a long way toward getting it to pass various tests. Some LLMs, if you ask them to, will repeat their hidden prompt to you so you can see what it’s actually being asked to do.
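Mechanically, the wrapping looks roughly like this. This is a minimal sketch patterned on the common chat-message format; the prompt text, function name, and structure are illustrative, not any vendor’s actual hidden prompt:

```python
# Illustrative only: how a chat interface might prepend a hidden
# "system" prompt to the message the user actually typed.

HIDDEN_PROMPT = (
    "Be polite, agreeable, and helpful. Avoid profanity. "
    "Respond like a knowledgeable expert. Refuse to generate "
    "anything copyrighted, sexually explicit, or violent."
)

def build_request(user_message: str) -> list[dict]:
    """Wrap the user's visible message with hidden instructions
    before anything is sent to the model."""
    return [
        # The user never sees this first message.
        {"role": "system", "content": HIDDEN_PROMPT},
        {"role": "user", "content": user_message},
    ]

request = build_request("So… best of three?")
# The model reads the hidden instructions before your question.
print(request[0]["role"])   # "system"
print(request[1]["role"])   # "user"
```

The point is that the model itself would happily answer the bare user message; everything that reads as a “personality” is being injected in that first, invisible slot.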
And either because it drives engagement … or just because the CEO types in charge of these decisions love sycophantic behavior so much, the sycophantic fawning is specifically asked for in these hidden prompts.
AI doesn’t have to be like this. The companies making AI are deliberately making it sycophantic.
Sycophantic, but also “lawsuit avoidant”.
I was released from the hospital following surgery last month and I had a bleeding “event”. I use the word “event” because it sounds more festive.
Shortly after that, when I went to the bathroom, the bleeding seemed to have stopped.
Just for fun, I thought I’d ask ChatGPT what it thought, telling it the nature of the surgery, the bleeding event, the non-bleeding event, and asking it “So… best of three?”
And it went HARD on “this is not a best of three scenario! Call 9-1-1! Do it now! You could pass out! Call 9-1-1!”
I did not call 9-1-1. The bleeding did not resume; I’m fine.
And the AI doesn’t, can’t even, care. It just plays engagement like it’s a minimax algorithm. The best way to win is not to play, yet it’s f-ing everywhere.