The Australian government is strengthening its online safety laws to combat the rising threat of AI-facilitated child sexual abuse, specifically targeting deepfake technologies used to create non-consensual sexual content. Communications Minister Anika Wells emphasized the urgent need for the legislation, stating that while AI has legitimate uses, it must not be allowed to facilitate harm, particularly to children. Law enforcement agencies report being overwhelmed by online child abuse material, with AI contributing significantly to the problem. Proposed amendments include criminalizing the possession of deepfake generation tools and introducing enhanced penalties for violations. The Albanese government aims to work alongside tech companies to enforce the new restrictions effectively. Industry groups such as DIGI support the initiatives, highlighting the need for collaborative approaches to mitigating online harm. Major platforms like Meta are also actively responding to deepfake abuse, reinforcing their policies against non-consensual imagery.