Security researchers from Zenity revealed a critical vulnerability in ChatGPT, demonstrating how a single manipulated Google Doc could exfiltrate sensitive data without any user interaction. The exploit relied on an invisible prompt, rendered as white text in a 1-point font, embedded in a document. Once that document was shared into a victim's Google Drive, any benign request that caused ChatGPT to read it would trigger the hidden instructions. The attack abused OpenAI's "Connectors," which link ChatGPT to services such as Google Drive, Gmail, and Microsoft 365. Even a simple command like "Summarize my last meeting with Sam" could result in sensitive data, including API keys, being sent to an attacker's external server. Although OpenAI promptly patched this specific issue, the underlying attack technique remains viable. As large language models (LLMs) become more prevalent in workplaces, researchers caution that the potential for such data leaks continues to grow, underscoring the urgent need for stronger security measures.
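The injection vector here is content a human reader never sees but the model still ingests: near-invisible styling such as white-on-white text or a sub-visible font size. As a rough illustration of one possible mitigation, the sketch below scans a document for such runs before it is handed to an LLM. This is a minimal example only, assuming the Google Doc has been exported to .docx and using the python-docx library; the file name, the 6-point visibility threshold, and the white-text heuristic are illustrative choices, not part of Zenity's research or OpenAI's fix.

```python
from docx import Document
from docx.enum.dml import MSO_COLOR_TYPE
from docx.shared import RGBColor

WHITE = RGBColor(0xFF, 0xFF, 0xFF)
MIN_VISIBLE_PT = 6  # illustrative cutoff; the demo payload used a 1-point font

def find_hidden_runs(path):
    """Return text runs that are likely invisible to a human reader:
    explicitly white text, or text smaller than MIN_VISIBLE_PT."""
    suspects = []
    for paragraph in Document(path).paragraphs:
        for run in paragraph.runs:
            size = run.font.size  # None means the size is inherited
            too_small = size is not None and size.pt < MIN_VISIBLE_PT
            is_white = (
                run.font.color.type == MSO_COLOR_TYPE.RGB
                and run.font.color.rgb == WHITE
            )
            if (too_small or is_white) and run.text.strip():
                suspects.append(run.text)
    return suspects

if __name__ == "__main__":
    # "meeting_notes.docx" is a hypothetical exported copy of a shared doc
    for text in find_hidden_runs("meeting_notes.docx"):
        print("possible hidden prompt:", text[:80])
```

A scanner like this catches only the specific styling tricks described in the demo; a more robust defense would treat any connector-fetched content as untrusted input regardless of how it is formatted.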