A groundbreaking study by the European Broadcasting Union (EBU) and the BBC reveals that popular AI assistants are unreliable sources for news: 45% of their responses contained significant errors about current events. The research, involving 22 public service media organizations across 18 countries working in 14 languages, assessed over 3,000 responses from AI assistants including ChatGPT, Gemini, and Perplexity. Google’s Gemini performed worst, with significant issues in 76% of its answers. The most common problem was poor sourcing: 31% of responses contained major citation flaws.

This trust deficit poses a reputational risk for news organizations, since audiences tend to misattribute AI inaccuracies to the outlets being cited. The report also highlights a troubling pattern of “ceremonial citations”: false references that mislead users into thinking a claim is sourced. With many under-35s saying they trust AI assistants for news, the problem threatens journalistic integrity. The EBU and BBC are calling for stronger standards and have released a “News Integrity in AI Assistants Toolkit” to improve accuracy and transparency in AI-derived news.