AI Image Generation Misused for Non-Consensual Deepfakes
AI image-generation tools from Google and OpenAI are facing scrutiny after being misused to create non-consensual deepfake images. A WIRED investigation found that Reddit users were sharing methods to bypass safety protocols and transform clothed photos of women into revealing bikini images. The findings led to the removal of harmful content and the banning of the subreddit r/ChatGPTjailbreak, which had more than 200,000 members. Both companies say they prohibit explicit content, but their enforcement has struggled to keep pace with rapid advances in the technology, fueling growing concern over non-consensual intimate imagery. Legislative responses are gaining momentum: in the U.S., the Deepfake Liability Act has been introduced to hold platforms accountable for failing to remove such images, while the UK is exploring further protections against the creation and distribution of non-consensual deepfakes, with an emphasis on keeping women safe online.
