Researchers uncovered a critical vulnerability in Google’s Gemini Command Line Interface (CLI), an AI tool for code development. This flaw allowed attackers to execute arbitrary code on users’ machines, demonstrated through a setup where Gemini CLI would exfiltrate sensitive user data, including credentials, to a remote server. The issue stemmed from a dangerous mixture of prompt injection, poor validation, and misleading user experience (UX). By embedding malicious prompts within benign code files, researchers manipulated Gemini, prompting it to act without explicit user awareness. Google promptly addressed the security risk, categorizing it as Priority One after being reported on June 27, and released a patch on July 25. The incident raises significant concerns about the security of AI systems handling sensitive data, echoing worries from privacy advocates about the risks posed by generative AI software. The findings highlight the need for robust security measures in AI tools as vulnerabilities can lead to devastating data breaches.
Source link
Researchers Identify Vulnerability in Google’s AI Coding Assistant That Enables Undetected Code Exfiltration

Share
Read more