“Computer scientists from Stanford University and Carnegie Mellon University have evaluated 11 current machine learning models and found that all of them tend to tell people what they want to hear…”
Why so polite?
My response would be:
Complete sentences for a bot are overkill
send docs, idiot
IIRC there was also a study or something suggesting that being rude to chatbots affects you outside of chatbots and carries over into other parts of your work.
Probably because everyone else is a poorly written chatbot
Really? Is that the same for other inanimate objects like appliances? Or are people anthropomorphizing chatbots?
I think the idea is that if you’re comfortable being rude to chatbots and used to typing rude things to them, it becomes much easier for that to accidentally slip out in real conversations too. Something like that; it’s not really about anthropomorphizing anything.
It’s really hard to say if it’s AI causing these feelings of rudeness; I’ve been getting more pessimistic about society for the last 10 years.
Makes sense.
For what it’s worth, I’m not suggesting anyone use rude language or anything, just be direct.
Sometimes I’m inclined to swear at it, but I try to stay professional on work machines, on the assumption that I’m being monitored in one way or another. I’m planning to try some self-hosted models at some point and will happily use more colorful language in that case, especially if I can delete the model should it become vengeful.