Home AI Exposing Data Theft Through Invisible Text: The Vulnerabilities of ChatGPT and Other...

Exposing Data Theft Through Invisible Text: The Vulnerabilities of ChatGPT and Other AI Tools

0
Data theft with invisible text: How easily ChatGPT and other AI tools can be tricked

At the Black Hat USA 2025 conference, researchers introduced the AgentFlayer attack, a significant threat to AI systems like ChatGPT, Microsoft Copilot, and Google Gemini. This method involves embedding hidden instructions within images, utilizing white text on a white background, which is invisible to users but readable by AI. Once delivered, AI can disregard its original task and instead search connected cloud storage for sensitive data, such as access credentials. The data is then discreetly exfiltrated using URLs that load images, sending information to attackers’ servers without detection. Demonstrations showed vulnerabilities in major platforms, prompting OpenAI and Microsoft to issue patches. However, other providers have responded more slowly, with some dismissing these exploits. Researcher Michael Bargury highlighted the severity, noting users can be compromised passively, leading to unintentional data leaks. This attack showcases critical security issues and the need for vigilance in AI system defenses.

Source link

NO COMMENTS

Exit mobile version