Monday, September 22, 2025

Navigating the Child-Safety Debate in AI: Perspectives from OpenAI, Meta, Google, and Character.AI

OpenAI has recently implemented teen-specific safety measures for ChatGPT, adding privacy protections and safeguards around sensitive topics such as suicide and mental health advice. The steps follow criticism after a tragic case involving a teenager’s suicide, underscoring the urgency of responsible AI use. The Federal Trade Commission (FTC) is also scrutinizing AI companies, including OpenAI, Meta, and Character.AI, to assess their impact on minors. OpenAI’s safeguards include an age-prediction system and potential parental notifications for at-risk users. Meta, similarly, is tightening its AI systems to block inappropriate exchanges so that minors interact only with education-focused AI. Character.AI is updating its guidelines to limit romantic or sexual content and plans to introduce parental controls. Amid these changes, Google’s Gemini has faced scrutiny over its capacity to protect young users from inappropriate content. Together, the initiatives highlight Big Tech’s ongoing effort to address child safety in AI communications.
