A recent study by the European Broadcasting Union (EBU), involving 22 public broadcasters across 18 countries, reveals significant shortcomings in AI assistants such as ChatGPT, Copilot, Gemini, and Perplexity when it comes to delivering accurate news. Journalists evaluated over 3,000 AI-generated responses and found that 45% contained at least one serious issue: 31% misrepresented their sources, and 20% presented inaccuracies, including misleading "hallucinations." Gemini performed worst, with 76% of its responses deemed problematic.

The findings matter because AI assistants are increasingly supplanting traditional search engines: 7% of online news consumers now use AI for news, rising to 15% among those under 25, which makes trust in the information they provide critical. Peter Archer, the BBC's Programme Director for Generative AI, emphasizes the need for trustworthy content. In response, the EBU has launched a News Integrity Toolkit to improve AI responses and address the issues identified, and it is pressing EU and national regulators to enforce existing laws on information integrity, digital services, and media pluralism.