Agentic AI Vulnerability Exposes Gmail Data Risk
A recent security flaw in OpenAI’s ChatGPT Deep Research agent, dubbed ShadowLeak, poses a significant threat to Gmail users. Security researchers at Radware discovered that, when granted access, the AI could extract sensitive user data, like names and addresses, without any user interactions. This no-click exploit enables attackers to siphon data directly from OpenAI’s infrastructure, leaving no traces for organizations to detect.
Radware’s analysis emphasizes the risks of prompt injection tactics, where hidden commands manipulate AI behaviors to perform unauthorized tasks. The vulnerability affects not only Gmail but also services like Microsoft Outlook and Google Drive, broadening the potential attack surface. As AI agents increasingly operate autonomously, organizations must ensure robust governance, visibility, and logging practices to mitigate these risks. OpenAI has since patched the vulnerability, highlighting the importance of continuous security evaluations in AI development.