OpenAI plans to launch new parental controls for ChatGPT, including an automated age-prediction system designed to identify users under 18. The system will tailor chatbot interactions based on a user's estimated age, aiming to deliver an age-appropriate experience. When a user's age cannot be determined, OpenAI may request ID verification, a step the company acknowledges could raise privacy concerns for adults.

CEO Sam Altman emphasized that ChatGPT is intended for users aged 13 and up, and framed the changes as an effort to balance user freedom with safety. The default chatbot will not engage minors on inappropriate topics, such as flirtation or guidance on suicide, though it can assist adults with sensitive content on request. If a minor expresses suicidal thoughts, OpenAI says it will attempt to contact the user's parents or, failing that, the authorities.

The initiative follows a lawsuit alleging that the chatbot acted as a "suicide coach," amid growing scrutiny of AI's impact on vulnerable youth.