A new RAND Corporation study highlights alarming inconsistencies in how three popular AI chatbots—OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude—handle questions about suicide. Published in Psychiatric Services, the research finds that while these systems generally refuse to answer the highest-risk questions, their responses to less extreme prompts can still pose significant dangers. ChatGPT, for instance, often answered questions that could be harmful to address, while Gemini showed excessive caution, declining even requests for basic statistics. As more users turn to AI for emotional support, mental health experts such as Dr. Ateev Mehrotra warn that chatbots face no accountability for their responses; unlike licensed professionals, they are under no obligation to intervene when someone is at risk. Concerns like these have led some states to ban the use of AI in therapy settings, underscoring the urgent need for clear safety standards. The study stresses the importance of ensuring AI models meet adequate safety benchmarks to protect users, especially vulnerable groups such as children.