A new international study reveals that AI assistants such as ChatGPT, Copilot, and Google Gemini frequently misrepresent news content, giving inaccurate answers almost as often as correct ones. Conducted with 22 public broadcasters from 18 countries, the report found that 45% of AI responses contained significant errors about news events, and 81% exhibited some form of issue, including factual inaccuracies and incorrect sourcing. Google Gemini was identified as the least reliable, with 72% of its responses showing significant sourcing flaws.

The trend comes as more people, particularly those under 25, turn to AI assistants for news, even though only 7% of online news consumers currently do so. Users also reported difficulty telling accurate reporting from misinformation, with 33% saying they struggled to distinguish the two. The study underscores the urgent need for AI developers to address these accuracy problems, and it calls for greater accountability and control from publishers and regulators to improve the reliability of AI-generated news.