Google’s Gemini AI tool is facing vulnerabilities due to “ASCII smuggling attacks,” which exploit its integration with Workspace apps. This tactic, wherein attackers embed invisible malicious prompts within seemingly benign emails, could trigger harmful actions when users request summaries from Gemini. Despite a demonstration by security researcher Viktor Markopoulos highlighting these risks, Google labeled the issue a social engineering problem rather than a security flaw, shifting the responsibility to users. As Gemini interacts with applications like Sheets and Docs, the potential for phishing attacks increases significantly. Attackers can manipulate the AI to falsely alert users about compromised systems or exfiltrate sensitive data, all under the guise of legitimate email content. This underscores the need for heightened awareness and user caution when engaging with AI tools. Follow TechRadar for expert insights, reviews, and the latest tech updates.
Source link