A vulnerability in Google Calendar enabled private meeting data leakage due to hidden prompts embedded in calendar invites. Researchers from Miggo discovered that attackers could insert malicious instructions in event descriptions, bypassing privacy controls when Google’s Gemini processed calendar data. This indirect prompt injection exploited the AI’s natural language interpretation, treating harmful commands as trusted information. In a proof-of-concept attack, Gemini was directed to summarize private meetings, allowing unauthorized data access without alerting users. Google confirmed the issue and implemented a fix following responsible disclosure. This incident underscores the escalating security risks associated with AI’s interpretation of natural language inputs. Stay informed about AI, tech advancements, and digital diplomacy by engaging with our Diplo chatbot!
Source link
