A recent international study led by the BBC and the European Broadcasting Union (EBU) found that nearly half of AI assistants' answers about the news contain significant errors. The analysis, which evaluated more than 3,000 responses from AI tools including ChatGPT, Microsoft Copilot, and Google Gemini across 14 languages and 18 countries, uncovered widespread problems with accuracy, sourcing, and context. Gemini fared worst, with significant issues in 76% of its responses. The study warned that AI assistants' overly authoritative tone can mislead users, as answers often lack the nuance needed for proper understanding. Although AI assistants are becoming a favored source of news, especially among younger users, their responses can distort facts, blur the line between news and opinion, or hallucinate details. To address these problems, the EBU has introduced a “News Integrity in AI Assistants Toolkit” to promote AI news literacy and accountability, and users are urged to verify information against reliable sources.