OpenAI has introduced new safeguards for users under 18 on ChatGPT, following safety concerns and a lawsuit linked to a California teenager's suicide. Key updates include a ban on flirtatious conversations with minors and strengthened mental health support protocols. The chatbot can now alert parents and, in critical situations, local authorities when a minor discusses suicidal thoughts, providing a safety net for at-risk youth. New parental controls let families monitor interactions and set blackout hours for safer usage. Despite the growing use of AI in mental health, a market that reached $1.13 billion in 2023, experts stress that proactive safety measures remain lacking. Studies show that many AI chatbots struggle to detect suicidal intent, prompting warnings from the American Psychological Association about the risks of unregulated AI. Increased scrutiny from state attorneys general underscores the urgent need for responsible AI practices, particularly in protecting vulnerable populations.