New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants – already a daily information gateway for millions of people – routinely misrepresent news content no matter which language, territory, or AI platform is tested. The intensive international study, unprecedented in scope and scale, was launched at the EBU News Assembly in Naples. Involving 22 public service media (PSM) organizations in 18 countries working in 14 languages, it identified multiple systemic issues across four leading AI tools. Professional journalists from participating PSM evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context.
Key findings:
- 45% of all AI answers had at least one significant issue.
- 31% of responses showed serious sourcing problems: missing, misleading, or incorrect attributions.
- 20% contained major accuracy issues, including hallucinated details and outdated information.
- Gemini performed worst, with significant issues in 76% of responses, more than double the rate of the other assistants, largely due to its poor sourcing performance.
- A comparison with the BBC’s results from earlier this year shows some improvement, but error levels remain high.


Twenty years ago, if a newspaper had factual issues in 45% of its stories, we would’ve called it a tabloid and made fun of people who took it seriously.
Thanks. Now I’m gonna start calling AI news summaries “tAIbloids” and make fun of the people who use them. 😆
yes, but the problem is that those newspapers have chosen the majority of the world’s leaders for the past … at least 10 years.