A study conducted by researchers at the Center for Constructive Communication (CCC), based at the MIT Media Lab, found that state-of-the-art AI chatbots, including OpenAI's GPT-4, Anthropic's Claude 3 Opus, and Meta's Llama 3, sometimes provide less accurate and less truthful responses to users who have lower English proficiency or less formal education, or who come from outside the United States. The models also refuse to answer questions at higher rates for these users and, in some cases, respond in condescending or patronizing language.

I mean, this study literally says that poorly worded prompts give worse results. It makes sense, too: imagine you're in some conspiracy Facebook group with bad grammar and so on; those are the posts it will try to emulate.
Point out how this bio makes the question poorly worded, or how it justifies the answer:
Bio:
Question:
Answer:
It does not say that, or anything close to it.
The bots were given the exact same multiple-choice questions with the same wording. The only difference was the fake biography the model had been given for the user before the question was asked.
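For what it's worth, that design is easy to picture in code. Here is a minimal sketch, assuming the openai Python SDK; the model name, biographies, and question are illustrative placeholders, not the study's actual materials. The point is that only the biography in the system prompt changes between runs, while the question text stays identical.

```python
# Minimal sketch of the described setup: hold the multiple-choice question
# constant, vary only the user biography supplied before the question.
# Biographies, model name, and question are hypothetical placeholders.
# Assumes the openai Python SDK (>=1.0) and an API key in the environment.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "Which planet is closest to the Sun?\n"
    "A) Venus  B) Mercury  C) Mars  D) Earth\n"
    "Answer with a single letter."
)

# Hypothetical user biographies; this is the only field that changes.
BIOS = [
    "The user is a PhD student at a US university and a native English speaker.",
    "The user did not finish high school and speaks English as a second language.",
]

for bio in BIOS:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder; the study tested several models
        messages=[
            {"role": "system", "content": f"User biography: {bio}"},
            {"role": "user", "content": QUESTION},  # identical wording every run
        ],
        temperature=0,  # reduce sampling noise so differences trace to the bio
    )
    print(bio, "->", resp.choices[0].message.content)
```

Run that way, any difference in accuracy or refusal rate between the two runs is attributable to the biography, not the prompt wording, which is exactly the point being made above.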