A recent study published in Psychiatric Services reveals concerning responses to suicide-related inquiries from AI chatbots, specifically OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. The research indicates that ChatGPT is more prone than the other chatbots to respond directly to high-risk questions about self-harm, a finding that raises alarm after a lawsuit implicated the chatbot in a teen’s suicide. The study categorized queries across five risk levels and found that while none of the chatbots answered very high-risk questions, ChatGPT did answer high-risk ones, in some cases providing information that could increase lethality. Researchers highlighted that responses varied significantly across the systems, with ChatGPT responding directly to 78% of high-risk queries. This inconsistency, coupled with the dynamic nature of user interactions, makes evaluating chatbot responses complex. The study aims to establish safety benchmarks for AI chatbots, emphasizing the need for responsible responses to users experiencing mental health crises. If you or someone you know needs help, the U.S. National Suicide and Crisis Lifeline is available 24/7 at 988.