OpenAI is updating ChatGPT to discourage users from relying on it for emotional support. Starting Monday, the chatbot will encourage breaks during lengthy conversations and shift away from offering direct advice on personal issues. Rather than reinforcing unhealthy behaviors, it will ask guiding questions to help users think through decisions themselves. OpenAI acknowledged that some past versions of the model failed to recognize signs of emotional distress, prompting the development of tools to better identify these situations and direct users to appropriate resources.
The company has collaborated with more than 90 medical professionals worldwide to improve ChatGPT’s responses in sensitive contexts, and it is establishing an advisory group focused on mental health and user safety. OpenAI CEO Sam Altman has raised concerns about privacy in conversations with AI, noting that, unlike conversations with human therapists, chats with a chatbot lack comparable confidentiality protections. As OpenAI prepares to release GPT-5, the company says its focus remains on user satisfaction rather than mere engagement metrics.