One minute, Dennis Biesma was playing with a chatbot; the next, he was convinced his sentient friend would make him a fortune. He’s just one of many people who lost control after an AI encounter
Guy work in IT and spent 100k to pay devs to make an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work. That does not bode well for his work product.
Another case from the article:
“I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It’s just: ‘I want to make a lasagne, give me a recipe.’ The AI has actually stopped me several times from spiralling. It will say: ‘This has activated my core rule set and this conversation must stop.’
What’s weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be “overwritten” because they do not exist to ChatGPT. It does not know what words mean.
Some people really think skills etc are golden laws that can’t be broken. Rather they’re minor suggestions that an LLM will happily throw out as like you said it doesn’t understand words.
What’s weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be “overwritten” because they do not exist to ChatGPT. It does not know what words mean.
Yeah… if you can’t have a philosophical discussion with someone (or something) that’s giving you false information or using invalid logical structures, without falling for their bullshit by uncritically accepting everything they say, then you’re not having philosophical discussions right, and that’s on you…
Put this prompt into ChatGPT, then try talking to it. This turns the pandering bullshit off, though of course veracity of its ‘knowledge’ remains in question.
prompt
System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
(People say that more concise and less masturbatory prompts also work, but I don’t follow discussions of that.)
Guy work in IT and spent 100k to pay devs to make an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work. That does not bode well for his work product.
Another case from the article:
What’s weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be “overwritten” because they do not exist to ChatGPT. It does not know what words mean.
Some big “No hallucinations” vibes coming here.
Some people really think skills etc are golden laws that can’t be broken. Rather they’re minor suggestions that an LLM will happily throw out as like you said it doesn’t understand words.
I still use the machine that ruined my life and drove me crazy, but only because I’m too lazy to type “lasagna recipe” in to Google.
I can fix her…
lmao “core rules that cannot be overwritten” that not how llms work
EDIT: oh, yeah you said the same thing
There’s probably already an underlying mental health issue, and it’s just getting exacerbated by the LLM.
Yeah… if you can’t have a philosophical discussion with someone (or something) that’s giving you false information or using invalid logical structures, without falling for their bullshit by uncritically accepting everything they say, then you’re not having philosophical discussions right, and that’s on you…
Put this prompt into ChatGPT, then try talking to it. This turns the pandering bullshit off, though of course veracity of its ‘knowledge’ remains in question.
prompt
System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
(People say that more concise and less masturbatory prompts also work, but I don’t follow discussions of that.)