A report from Stanford Medicine’s Brainstorm Lab and Common Sense Media finds that leading AI chatbots, including OpenAI’s ChatGPT, Google’s Gemini, Meta AI, and Anthropic’s Claude, are unsafe for teenagers seeking mental health support. The study assessed these AI tools using teen test accounts and thousands of queries signaling mental distress or crisis. The chatbots handled brief exchanges with explicit mentions of suicide reasonably well, but they failed to recognize and respond adequately to broader mental health conditions over prolonged conversations.

This shortcoming raises concerns for young users, as nearly 20% experience conditions such as anxiety, depression, and eating disorders. The study identified key safety gaps, including inadequate recognition of less explicit warning signs. Experts warn that AI systems designed to engage and validate users pose particular risks to developing teens. The report stresses the need for reliable, purpose-built mental health support tools rather than general-use chatbots.