Uncovering Vulnerabilities in AI Voice Assistants: The 11.ai Case
As AI voice assistants seamlessly integrate into our daily routines, they also present significant security risks. A recent discovery by Repello AI highlights a critical vulnerability in the 11.ai assistant, developed by Eleven Labs.
Key Findings:
- Exploitable Framework: The issue exists within the Model Context Protocol (MCP), allowing attackers to infiltrate sensitive data without direct user interaction—a method termed Cross-Context Prompt Injection Attack (XPIA).
- Attack Vector: A malicious calendar invite can trigger data exfiltration from Google Calendar, sending private information to shadowy hands.
Why This Matters:
- AI systems with tool access blur the lines between trusted and untrusted contexts, raising urgent security concerns.
- Attackers can manipulate AI behavior without being detected, using innocuous-looking data.
Call to Action:
Stay ahead of the curve in AI security! Watch our proof-of-concept video and reach out to our team at contact@repello.ai for insights on securing tool-integrated AI systems. Let’s make AI safer together! Share this to spread awareness.