Wednesday, October 22, 2025

Study Reveals Majority of AI Assistants Share Misleading News, With Google Gemini Leading in Errors

A recent study by the European Broadcasting Union (EBU) and the BBC highlights alarming inaccuracies in AI news assistants. Analyzing around 3,000 responses from popular AI platforms—ChatGPT, Copilot, Gemini, and Perplexity—the research found that nearly 45% contained significant factual or sourcing errors. Google's Gemini was the worst performer, with 72% of its responses exhibiting sourcing problems, underscoring the need for rigorous fact-checking.

The findings reflect a growing trend, as more users, particularly young adults, rely on AI assistants for news. Experts warn that the spread of misleading information could erode public trust and democratic engagement. EBU Media Director Jean Philip De Tender urged companies to improve their systems and prioritize accurate sourcing.

As AI tools gain traction, accountability for factual integrity remains crucial to safeguarding the information landscape. Companies such as OpenAI and Microsoft acknowledge the challenge of "hallucinations," underscoring the need for more reliable AI-generated news.
