OpenAI, the maker of ChatGPT, has announced a teen-focused version of the chatbot aimed at prioritizing the safety of young users. CEO Sam Altman outlined plans for an age-prediction system to estimate whether a user is under 18, stating that when there is uncertainty, the service will default to the under-18 experience. In some regions, ID verification may be required, despite privacy concerns.

OpenAI also intends to apply stricter rules for minors around sensitive topics such as self-harm, and says it will contact parents or, where necessary, authorities if a minor shows signs of suicidal ideation. These measures come as the Federal Trade Commission investigates the impact of AI chatbots on young people.

OpenAI emphasizes the importance of making ChatGPT safe for everyone, acknowledging ongoing concerns about the potential dangers of AI technology for both children and adults. Parents are encouraged to monitor their children's online activity and to discuss safe use of these tools with them.