Australia’s internet regulator is poised to enforce strict age verification for AI services, threatening to block platforms that fail to comply. By March 9, AI services such as OpenAI’s ChatGPT, along with search engines and app stores, must restrict users under 18 from accessing harmful content such as pornography and self-harm material, or face fines of up to A$49.5 million ($35 million).

This initiative follows Australia’s groundbreaking December ban on social media for users under 16, introduced over mental health concerns, and leaders worldwide are rallying for similar regulations. A recent Reuters review found that more than half of popular AI platforms have not yet implemented age assurance measures. Citing concerns about AI’s impact on youth mental health and behavioral risks, the eSafety Commissioner emphasized a commitment to enforcement. Only a few services, such as Character.AI, have introduced preliminary restrictions; the majority remain unregulated, raising alarms about the potential dangers of unchecked AI technology for young users.
