The people who internalize this would never engage with a chatbot in this way in the first place. To them this is another intelligence they’re conversing with, where you get what you want by following social decorum and enforcing your will amounts to abuse.
Exactly.
They literally, fundamentally, don’t get it.
They think it's a person.
It's not.
It's a simulation of a person, made of code and hardware, not meat and chemical receptors.
…There’s a recurring theme in a lot of analog horror series: things that are … almost, sort of human, sometimes, but they’re actually not.
They’re capable of great violence and terror, and they only mimic (often very poorly) human qualities and attributes, some of the time.
… Do I need to explicitly lay out the parallels here, for any AI Safety Engineers in the audience?