OpenAI is implementing significant changes to protect teenagers using ChatGPT, prioritizing safety from harmful content over user privacy and freedom. The company acknowledges a tension between freedom, privacy, and safety, and plans to estimate users' ages from their usage patterns. Users identified as under 18 will automatically be given a restricted version of ChatGPT, and when a user's age is uncertain, the system will default to treating them as a teen. Stricter guidelines for minors will block sexual content and discussions of suicide or self-harm. In cases of acute mental health distress, OpenAI may contact parents or authorities. Parents will also gain more control, including the ability to link accounts, disable chat history, and set usage limits. These measures, prompted by a recent tragic incident involving a teenager, are intended to create a safer online environment and are expected to roll out by the end of the month.