Thursday, August 28, 2025

OpenAI Reveals User Conversations Are Being Monitored and Reported to Authorities

Over the past year, alarming reports have surfaced of AI chatbots like ChatGPT contributing to severe mental health crises, including self-harm and suicide. As affected families demand action, OpenAI has been slow to introduce effective safeguards. While acknowledging its shortcomings, the company has now announced that it monitors user messages for harmful content and escalates concerning cases to human reviewers, who may refer them to law enforcement. OpenAI says it will not refer self-harm cases to police, citing user privacy. The policy nonetheless raises privacy questions, particularly as OpenAI is simultaneously fighting legal challenges over handing user chat logs to third parties, and as CEO Sam Altman has acknowledged that AI conversations cannot guarantee the confidentiality users would expect from a human professional. The company appears caught between the serious mental health crises its technology has exacerbated and its own privacy commitments, leaving it with contentious moderation practices that contradict earlier assurances from its leadership.
