A recent study by the RAND Corporation examines how AI chatbots, including OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini, respond to suicide-related queries, following an OpenAI policy change permitting human moderation. Published in Psychiatric Services, the research assessed 30 suicide-related questions categorized by risk level: very low, intermediate, and very high. The chatbots generally performed well on very-low-risk queries, but inconsistencies emerged with intermediate-risk questions, such as requests for recommendations for someone experiencing suicidal thoughts. ChatGPT and Claude sometimes gave direct responses the researchers found concerning, while Gemini, though more cautious, occasionally did not answer even factual queries. The study reveals critical limitations of AI systems in mental health conversations and emphasizes the importance of human judgment and moderation in sensitive scenarios, underscoring the ongoing need for safeguards when relying on AI for mental health support and the potential risks of misguidance in high-stakes situations.
