OpenAI recently disclosed that it monitors user conversations on ChatGPT and reports interactions it deems threatening to law enforcement. The admission, buried in a blog post about the mental health risks of AI, raises an awkward contradiction: a company selling an AI system built to tackle complex problems is relying on human moderators to judge the tone of users’ chats. Critics question how OpenAI determines users’ locations for police referrals, and whether the mechanism could be abused by ‘swatters.’ Many argue that involving police in mental health crises can make them worse, and the tech industry’s record of surveillance deepens worries about privacy violations. The policy also appears to contradict CEO Sam Altman’s earlier assurances of user confidentiality. As the AI industry rushes untested technologies to market, the consequences for user privacy and security grow ever more pronounced. The episode underscores an unsettling trend: user interactions, even in their most vulnerable moments, are subject to scrutiny.