certified_expert@lemmy.world to Showerthoughts@lemmy.world · 6 hours ago

Humans' behavior toward LLMs is the same as animals' with a mirror: they believe there is "another" in there. It's just their reflection.
cally [he/they]@pawb.social · 3 hours ago

Related: is there a name for "question bias"? Like asking ChatGPT "is x good?" and it replies "Yes, x is good." But if you ask "is x bad?" it replies "Yes, x is bad, you're right."

It's just a leading question.

yeahiknow3@lemmy.dbzer0.com · edited · 1 hour ago

It is not a leading question. The answer just happens to be meaningless. Asking whether something is good is the vast majority of human concern; most of our rational activity is fundamentally evaluative.