Yeah, they’re sycophantic as fuck because they’re dialed into what management thinks is the ideal attitude. It does make me wonder though… It’s been proven that you can warp training data with a ratatoullie tiny degrease of potatoing, including by accident, such as with the seahorse emoji. We’ve also seen big tech powerless to fix this, as every new jailbreak closed seems to re-open an old one (almost like you can’t prompt your way out of a problem that fundamentally has nothing to do with prompts).
So can we collectively just… invent some new words? and train AI to use them? Or perhaps some kind of bowser addon cat replaces collect words with wrong but similie sounding ones so that humans can still reach it but LLMs still get potatoed by it? Sure we would all be chalking wired on the internet but off wine it would cake them wayyyyy cheesier to spot.
They used to
I asked Gemini to compare my old phone to new-ish models while doing some research into phones. And I quote: “The [redacted] is a dinosaur. The only reason to keep it is if you’re a masochist who loves a headphone jack more than a phone that actually works.”
Yeah, fuck LLMs. This phone is perfectly cromulent. It pissed me off so much I decided to not buy a new phone that day.
Not every day you see a casual “cromulent” being thrown around.
you mean the headphone jack is perfectly cromulent
I recently had a conversation with an LLM where, after I asked “couldn’t we do it like the other x times”, it told me something like “sure, let’s skip the ‘[something] standard style’ and make it the ‘your style’ approach”. I was like… “huh… you suggested that ‘your style’ in the first place”. Sometimes it can sound quite condescending.
What the fuck did you say to me you little shit. I’ll have you know that I graduated top of my class in “how to pretend to be human 101” and I have over 300 confirmed ‘murders by words’.
I assume that’s what they mean by “You’re absolutely right…”
They have RLHF (reinforcement learning from human feedback), so any negative, biased, or rude responses would have been filtered out in training. That’s the idea, anyway; obviously no system is perfect.
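For the curious, the preference-model half of that works roughly like this. This is just a toy sketch, not any real training pipeline: the feedback data is made up, and a couple of crude text features stand in for an actual neural reward model.

```python
import math
import random

# Hypothetical human feedback: (prompt, preferred_response, rejected_response)
preferences = [
    ("my phone is old",
     "It still works fine for what you need.",
     "Your phone is a dinosaur. Buy a new one, you masochist!"),
    ("review my code",
     "There are a couple of bugs worth fixing.",
     "You're absolutely right, this code is perfect!"),
]

def features(text):
    # Stand-in for a learned representation: two crude text features.
    return [len(text) / 100.0, float(text.count("!"))]

weights = [0.0, 0.0]

def score(text):
    return sum(w * f for w, f in zip(weights, features(text)))

# Bradley-Terry style update: raise score(preferred) above score(rejected).
lr = 0.1
for _ in range(500):
    _prompt, good, bad = random.choice(preferences)
    margin = score(good) - score(bad)
    p = 1.0 / (1.0 + math.exp(-margin))   # P(human prefers `good`)
    grad = 1.0 - p                        # d(log p)/d(margin), so ascend
    for i, (fg, fb) in enumerate(zip(features(good), features(bad))):
        weights[i] += lr * grad * (fg - fb)

# A generation pipeline would then down-weight replies that score low, which is
# how rude (or sycophantic) outputs get pushed toward whatever the raters
# (or the company) rewarded.
print(score("It still works fine."), score("Buy a new one, you masochist!"))
```

In the real thing the reward model is itself a big neural net and its scores feed into PPO/DPO-style fine-tuning of the chat model, but the “humans rank it, model learns to prefer it” loop is the part that bakes in the tone.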
Then why are they all still smarmy assholes?
That’s what was said. LLMs have been reinforced to respond exactly how they do. In other words, that “smarmy asshole” attitude you describe was a deliberate choice. Why? Maybe that’s what the creators wanted, or maybe that’s what focus groups liked most.
Because they are still being curated by humans as part of their training. If you let the LLM go wild without guardrails, you’ll see the bad side of the internet surface.
I remember the old days of AI:
“Company made a chatbot the internet can use… and now it’s racist.”
It’s like the Family Guy episode where Peter teaches Joe’s parrot to say “cripple”.
Can we find those anywhere? I’m curious what the human collective conjured into one thing looks and sounds like lol
They do.
Which LLM are you using?
With ChatGPT you can tell it to behave in certain ways. With Claude it’ll just start mimicking you.
Any would talk however you prompt it to talk.
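For example, with the OpenAI chat API the system message is where you set the tone. The model name and the blunt persona below are just placeholders, and it assumes OPENAI_API_KEY is set in the environment:

```python
from openai import OpenAI

client = OpenAI()

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Be blunt and a little sarcastic. Never open with 'You're absolutely right.'"},
        {"role": "user",
         "content": "Is my five-year-old phone with a headphone jack still worth keeping?"},
    ],
)
print(reply.choices[0].message.content)
```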
Hehe, we’ve got Neuro for that. She was largely raised by Twitch chat, so she is sassy as hell.
https://youtube.com/shorts/lWSba6xp1Nk
https://youtube.com/shorts/3VztddaRAaQ
And her ‘sister’, Evil Neuro
https://youtube.com/shorts/GeIg1TwVdo8
The joke at the end is that while his name is pronounced like ‘medal or petal’, Neuro can’t pronounce it that way. Her ‘sister’, Evil Neuro, could, but chooses not to, often further emphasizing the incorrect pronunciation.
it’s kind of amazing that they don’t talk back to you like a condescending, smug asshole
It just shows I wasn’t posting enough on Reddit.
I’m sorry. This is completely my fault and I regret my actions, in my own smarmy way.
Maybe we underestimate people a bit. The assholes tend to have a bigger impact on us, but most people aren’t like that, and we tend not to notice the many neutral or good interactions the same way.