Australia’s internet regulator is poised to enforce age verification for artificial intelligence services after a review found that more than half of these platforms had yet to comply ahead of the upcoming deadline. The move reflects Australia’s proactive stance on regulating AI, particularly amid growing concern about its impact on youth mental health and a rising number of lawsuits accusing AI companies of promoting self-harm and violence. Having banned social media for teenagers in December over mental health concerns, authorities are now targeting AI-generated content. From March 9, 2026, platforms such as OpenAI’s ChatGPT must prevent users under 18 from accessing harmful content, including pornography and extreme violence, or face fines of up to A$49.5 million (US$35 million). The eSafety Commissioner has signalled that non-compliance could also carry significant repercussions for search engines and app stores that provide access to these services.
