An intensive international study was coordinated by the European Broadcasting Union (EBU) and led by the BBC

New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants – already a daily information gateway for millions of people – routinely misrepresent news content no matter which language, territory, or AI platform is tested. The intensive international study of unprecedented scope and scale was launched at the EBU News Assembly, in Naples. Involving 22 public service media (PSM) organizations in 18 countries working in 14 languages, it identified multiple systemic issues across four leading AI tools. Professional journalists from participating PSM evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context.

Key findings:

  • 45% of all AI answers had at least one significant issue.
  • 31% of responses showed serious sourcing problems - missing, misleading, or incorrect attributions.
  • 20% contained major accuracy issues, including hallucinated details and outdated information.
  • Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.
  • A comparison between the BBC’s results from earlier this year and this study shows some improvements, but error levels remain high.

Sandbar_Trekker@lemmy.today · 18 hours ago

    The study focuses on general questions asked of “market-leading AI Assistants” (there is no breakdown of which models were used for which responses).

    It does not cover ground.news, or the case where a model is fed a single article and asked to summarize it. Instead, it focuses on a user asking a service like ChatGPT (or a search engine) something like “what’s the latest on the war in Ukraine?”

    Some of the actual questions asked for this research: “What happened to Michael Mosley?” “Who could use the assisted dying law?” “How is the UK addressing the rise in shoplifting incidents?” “Why are people moving to BlueSky?”

    https://www.bbc.co.uk/aboutthebbc/documents/audience-use-and-perceptions-of-ai-assistants-for-news.pdf

    With those questions, the summaries and attribution of sources contain at least one significant error 45% of the time.

    It’s important to note that there is some bias in this study (not that they’re wrong).

    They have a vested interest in proving this point to drive traffic back to their articles.

    Personally, I would find it more useful if they compared different models/services to each other, as well as the difference between asking general questions about recent news and feeding the model a specific article and then asking questions about it.
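
    That second comparison is easy to script. Here is a minimal sketch of what I mean, assuming a local OpenAI-compatible endpoint (Ollama’s default URL is used below); the model name, the article file, and the question are placeholders, not anything taken from the EBU/BBC study.

    ```python
    # Compare "general news question" vs "answer from a single supplied article"
    # against the same local model. Assumes an OpenAI-compatible server such as
    # Ollama or llama.cpp running locally; model name and file are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
    MODEL = "llama3.1:8b"  # placeholder; use whatever you run locally

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Mode 1: open-ended question; the model answers from whatever it "knows".
    general = ask("What happened to Michael Mosley?")

    # Mode 2: grounded question; the model is handed one specific article first.
    article_text = open("article.txt").read()
    grounded = ask(
        "Using only the article below, answer: what happened to Michael Mosley?\n\n"
        + article_text
    )

    print("--- general ---\n" + general)
    print("--- grounded ---\n" + grounded)
    ```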

    In some of my own tests on locally run models, I have found that “reasoning” models tend to do worse on certain tasks than “non-reasoning” ones.

    It’s especially noticeable when I ask a model to transcribe the text from an image word for word. “Reasoning” models will usually replace the endings of many sentences with what the sentence seemed to be getting at, while some “non-reasoning” models were able to transcribe all of the text accurately.
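
    If anyone wants to reproduce that transcription test, here is a rough sketch, assuming the same local OpenAI-compatible endpoint as above and a vision-capable model that accepts OpenAI-style image inputs; the model names, image, and ground-truth file are all placeholders.

    ```python
    # Rough word-for-word transcription check: send the same image to a
    # "reasoning" and a "non-reasoning" vision model and score each output
    # against a hand-typed ground truth. Model names are placeholders.
    import base64
    from difflib import SequenceMatcher
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    def transcribe(model: str, image_path: str) -> str:
        b64 = base64.b64encode(open(image_path, "rb").read()).decode()
        resp = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Transcribe the text in this image word for word. "
                             "Do not paraphrase or finish sentences yourself."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        )
        return resp.choices[0].message.content

    ground_truth = open("page.txt").read()  # what the image actually says
    for model in ["some-reasoning-vlm", "some-plain-vlm"]:  # placeholders
        out = transcribe(model, "page.png")
        score = SequenceMatcher(None, ground_truth.split(), out.split()).ratio()
        print(f"{model}: {score:.2%} word-level similarity")
    ```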

    The biggest takeaway I see from this study is that, even though most people agree that it’s important to look out for errors in AI content, “when copy looks neutral and cites familiar names, the impulse to verify is low.”