A new UK law empowers tech companies and child protection agencies to test AI tools for their potential to create child sexual abuse material (CSAM). The legislation follows a surge in reports of AI-generated CSAM, from 199 in 2024 to 426 in 2025. It allows approved experts to examine AI models, such as those behind ChatGPT and Google’s Veo 3, under strict conditions to prevent the generation of abusive images. The Minister for AI, Kanishka Narayan, emphasized the importance of stopping abuse before it happens, noting that previous measures could only address CSAM after it had been uploaded online.
The law also bans the creation and possession of AI models designed to generate CSAM. The Internet Watch Foundation highlighted the alarming rise in AI-generated abuse cases, which disproportionately affect girls, while Childline reports point to increasing instances of online blackmail and abuse involving an AI component. The changes aim to make AI deployment safer and to strengthen the protection of children online.
