A study conducted by researchers at CCC, which is based at the MIT Media Lab, found that state-of-the-art AI chatbots — including OpenAI’s GPT-4, Anthropic’s Claude 3 Opus, and Meta’s Llama 3 — sometimes provide less-accurate and less-truthful responses to users who have lower English proficiency, less formal education, or who originate from outside the United States. The models also refuse to answer questions at higher rates for these users, and in some cases, respond with condescending or patronizing language.

  • fiat_lux@lemmy.world
    link
    fedilink
    English
    arrow-up
    3
    ·
    9 hours ago

    It’s definitely not indicative of the region, it’s a weird jumble of ESL stereotypes, much like the content.

    The patois affecting the response is expected, it was basically part of the hypothesis, but the question itself is phrased fluently, and neither bio nor question is unclear. The repetition about bar charts with weird “da?” ending is… something.

    Sure, some of it is fixable but the point remains that gross assumptions about people are amplified in LLM data and then reflected back at vulnerable demographics.

    The whole paper is worth a read, and it’s very short. This is just one example, the task refusal rates are possibly even more problematic.