AFP Tackles Deepfakes Using Contaminated AI Data Strategies
Three pictures of a red-headed woman with different effects applied.

The Australian Federal Police (AFP) has partnered with Monash University to combat the rise of AI-generated deepfakes and child sexual abuse material (CSAM) through a new tool named "Silverer." The disruption tool uses a technique called data poisoning to subtly alter images before they can be used as training data for AI models, making it harder for cybercriminals to produce convincing malicious imagery. Funded by the AFP's Federal Government Confiscated Assets Account, Silverer specifically targets online child exploitation and is designed to run locally, allowing users to protect their own images from manipulation. The AFP has recorded a rise in deepfake incidents, with recent figures showing roughly one incident per week in Australian schools. The data-poisoning approach is intended to raise the difficulty and cost of producing such material. Separately, researchers from several institutions have introduced RAIS, a new method for detecting audio deepfakes, underscoring the ongoing effort against synthetic media in cybersecurity.
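The AFP and Monash University have not published Silverer's algorithm. As a rough conceptual illustration only, pixel-level data poisoning generally means adding a small, visually negligible perturbation to an image before it is shared; the sketch below shows that idea with a randomly sampled perturbation (the function name, file paths, and the epsilon parameter are illustrative, and real protection tools craft the perturbation adversarially against target models rather than at random).

```python
import numpy as np
from PIL import Image


def poison_image(path: str, out_path: str, epsilon: int = 4) -> None:
    """Add a small, bounded perturbation to an image before sharing it.

    Conceptual sketch of pixel-level data poisoning: the perturbed copy
    looks unchanged to a person, but the added noise is intended to
    degrade AI models trained or fine-tuned on the image.
    """
    # Load as int16 so the addition below cannot overflow uint8.
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)

    # Random perturbation bounded by +/- epsilon per pixel and channel.
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)

    # Clip back to the valid 0-255 range and save the protected copy.
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(out_path)


if __name__ == "__main__":
    poison_image("original.jpg", "protected.jpg")
```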
