OpenAI is adding new mental health safeguards to ChatGPT after criticism of its responses to users in emotional distress, including a case involving a man with autism that highlighted the chatbot's shortcomings in addressing delusional thoughts. OpenAI acknowledged the need for improvement, stating, "We don't always get it right," and said it will evolve its approach based on real-world experience. The updated ChatGPT is intended to better detect signs of mental distress, encourage breaks during lengthy interactions, and steer users toward evidence-based resources. Rather than giving direct answers to high-stakes questions, such as personal relationship dilemmas, it will ask guiding questions to help users reach their own decisions. OpenAI is also forming an advisory group of mental health experts to inform future updates. While AI chatbots can assist with emotional management, experts emphasize that meaningful support depends on personal connections with trained professionals. Previous adjustments to ChatGPT aimed to provide more constructive feedback instead of overly agreeable responses.