Artificial intelligence assistants frequently deliver inaccurate or poorly sourced answers to news queries, according to a landmark study by the European Broadcasting Union (EBU) and the BBC. Researchers analyzed 3,000 news-related answers from major AI assistants, including ChatGPT, Gemini, and Copilot, across 14 languages, finding that 45% contained at least one significant issue and 81% had some form of problem.
The report highlights critical gaps in AI news reliability:
- 33% of responses featured serious sourcing issues, including fabricated citations
- 20% contained factual inaccuracies like outdated information
- 72% of Gemini's responses showed significant sourcing problems
Notable errors included Google's Gemini misrepresenting legislation on disposable vapes and ChatGPT continuing to identify Pope Francis as the sitting pope months after his death. With 15% of under-25s using AI assistants for news, researchers warn that such errors could erode public trust in digital information sources.
"When people don't know what to trust, they end up trusting nothing at all," said EBU Media Director Jean Philip De Tender, emphasizing the democratic implications. The study urges AI developers to improve transparency and accuracy in news responses as adoption grows.
Reference(s):
"New research shows AI assistants make widespread errors about the news," CGTN (cgtn.com)