OpenAI has introduced new parental controls for ChatGPT, responding to a lawsuit over the suicide of a teenager who allegedly turned to the chatbot for guidance. Launched on Monday, the features let parents restrict how their teens interact with ChatGPT and receive alerts if the chatbot detects signs of distress. Accessible through the settings menu, the controls allow parents to set specific usage hours; the platform itself remains limited to users aged 13 and older. The move aims to improve safety for young users and to address growing concerns about the impact of AI on mental health. OpenAI says it remains committed to user well-being while navigating the complexities of AI technology, and with these parental controls it seeks to provide a safer, better-monitored environment for teens using ChatGPT.