OpenAI is adding age-verification features to ChatGPT amid concerns over “AI psychosis” and youth safety. CEO Sam Altman announced automated age-prediction systems and parental controls aimed at safeguarding users under 18. Users identified as minors will get a restricted version of ChatGPT that blocks sexual content and applies additional safety measures. In some cases, users may be asked to provide identification, a step the company acknowledges as a privacy trade-off for adults.
Parents will be able to link accounts to monitor usage, disable features, and receive notifications if the AI detects signs of distress. The initiative follows tragic incidents linked to chatbot interactions, which have drawn scrutiny from lawmakers and the Federal Trade Commission. Questions remain about how accurately AI can predict a user's age, and OpenAI acknowledges that conversations with the chatbot are deeply personal; even so, the company says it will prioritize youth safety over privacy when the two conflict, aiming to prevent further tragedies connected to AI interactions. Parental oversight features are expected to launch by the end of September.