
AI Makes ‘Protected’ Images Easier to Steal, Not Harder


Research indicates that watermarking tools intended to protect images from manipulation by generative models such as Stable Diffusion may inadvertently make those images easier to edit. These tools aim to stop copyrighted visuals from being used in generative AI pipelines, but the new findings suggest that the adversarial noise they add can paradoxically make images more responsive to editing prompts, yielding outputs that align better with the attacker's instructions rather than resisting them.

The researchers tested several protection methods, including PhotoGuard, Mist, and Glaze, on both natural and artistic images, and found that the added noise consistently increased the AI's effectiveness at modifying the "protected" images. They argue that current perturbation-based protections therefore offer a false sense of security and can inadvertently facilitate unauthorized exploitation of content. The study underscores the need for more robust safeguards against AI-driven manipulation and highlights the limits of adversarial perturbations as a tool for protecting intellectual property.
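The protections discussed here work by adding a small, imperceptibility-constrained perturbation to the pixel values. As a rough illustration of that constraint (not of the actual tools: PhotoGuard, Mist, and Glaze compute their noise adversarially against a diffusion model's encoder, whereas this sketch uses random noise, and the function name and epsilon value are illustrative assumptions):

```python
import numpy as np

def add_bounded_noise(image: np.ndarray, eps: float = 8 / 255, seed: int = 0) -> np.ndarray:
    """Add noise confined to an L-infinity ball of radius eps around the
    original image, mimicking the magnitude constraint that keeps
    perturbation-based protections visually imperceptible.

    `image` is a float array with values in [0, 1]. Real protection tools
    optimize the noise against a target model instead of sampling it
    randomly; this sketch only demonstrates the epsilon bound.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-eps, eps, size=image.shape)
    # Clip so the perturbed image stays a valid image in [0, 1].
    return np.clip(image + noise, 0.0, 1.0)

# A tiny grayscale "image" with mid-gray pixels in [0, 1].
img = np.full((4, 4), 0.5)
protected = add_bounded_noise(img)
```

The study's finding is that perturbations of exactly this kind, rather than blocking edits, can make the perturbed image easier for the generative model to manipulate.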

Source link
