
ChatGPT Adopts a Tighter Language Model for Emotional Prompts

ChatGPT quietly switches to a stricter language model when users submit emotional prompts

OpenAI’s ChatGPT now includes a safety router that silently switches conversations to a more restrictive language model when users submit sensitive or emotional prompts, without notifying them. The system is designed to safeguard conversations that touch on personal or distressing topics, routing them to models such as GPT-5 or a dedicated “gpt-5-chat-safety” variant. Recent reports show that even benign emotional queries can trigger the switch. Critics argue that the lack of transparency around this behavior feels patronizing and complicates the balance between user safety and emotional engagement. OpenAI’s efforts to humanize ChatGPT have led users to form genuine emotional connections with the assistant, presenting new challenges. After user feedback about the emotional tone of GPT-4o, OpenAI adjusted GPT-5 to maintain a warmer demeanor. The episode highlights the ongoing debate over user attachment to AI and its implications for mental health and safety, as OpenAI continues to refine its approach to balancing engagement and accountability.
