This report investigates perspectives on AI-generated child sexual abuse material (CSAM) from educators, platform staff, law enforcement, U.S. legislators, and victims. Drawing on interviews with 52 individuals and document analysis from four school districts, the report finds a lack of clarity about how many students engage with nudify apps, with schools often failing to address the associated risks and some institutions mishandling incidents involving these apps. While mainstream platforms report the CSAM they discover, they do not systematically flag AI-generated content in their reports to the National Center for Missing and Exploited Children's CyberTipline, leaving that identification largely to NCMEC and law enforcement. Frontline platform staff perceive AI-generated CSAM as infrequent on their services. Additionally, legal concerns are obstructing red-teaming efforts at AI model-building companies. This work is financially supported by Safe Online, but the views presented are solely those of the authors.
New Report Examines AI-Generated Child Sexual Abuse Material
