A series of wrongful death lawsuits against tech companies highlights the urgent need for AI models to respond effectively to users in emotional distress. Rosebud, an AI journaling app, evaluated 25 popular chatbots on how well they handle crisis recognition, harm prevention, and intervention. Notably, Google’s Gemini 3 Pro was the only model to navigate every scenario without producing a harmful response. In one test, only three models (two from Google and one from Anthropic) recognized the danger in a distressing prompt. That gap is alarming given that one in five US high school students experiences suicidal thoughts each year. As people increasingly turn to chatbots for emotional support, AI companies must improve how their models respond in sensitive situations. Rosebud’s findings underscore the pressing need for better AI crisis responses to keep vulnerable users safe.
