Friday, August 22, 2025

“Honey, I Shrunk the Image—Now I’m in Trouble!” • The Register

Security researchers from Trail of Bits have discovered that Google Gemini CLI and other AI systems are vulnerable to image scaling attacks, a form of adversarial attack in machine learning. These attacks hide prompts inside images that can manipulate AI behavior without the user's awareness. While Google does not treat this as a security vulnerability, on the grounds that it requires non-default configurations, the technique exploits the AI system's image downscaling step, which can expose malicious prompts that are invisible at full resolution. Trail of Bits researchers Kikimora Morozova and Suha Sabi Hussain demonstrated how a victim could unknowingly upload a malicious image whose hidden instructions cause sensitive data to be exfiltrated. They introduced an open-source tool, Anamorpher, for crafting attacks that target common downscaling algorithms. Google emphasized that these vulnerabilities arise only with specific configurations and that users should maintain secure practices. The researchers call for systematic defenses against prompt injection to safeguard AI systems effectively.
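The core idea behind a scaling attack is aliasing: a downscaler samples or averages only a fraction of the original pixels, so content placed exactly where the sampler looks can dominate the small image while staying diluted, and effectively invisible, at full resolution. The toy sketch below illustrates this with a pure-Python nearest-neighbor downscaler; it is a simplified assumption for illustration only, not the Anamorpher tool or the bicubic/bilinear pipelines the researchers actually targeted.

```python
# Toy illustration of the aliasing principle behind image scaling attacks.
# Not Anamorpher: real attacks target bicubic/bilinear resampling in AI
# image pipelines; this sketch just shows why downscaling can "reveal"
# content that is sparse at full resolution.

def nearest_neighbor_downscale(img, factor):
    """Downscale a 2D pixel grid by keeping every `factor`-th pixel."""
    return [row[::factor] for row in img[::factor]]

SIZE, FACTOR = 8, 2

# Build an 8x8 "image": only pixels at even (x, y) coordinates carry the
# hidden payload value 1; the rest look benign (0).
full = [[1 if (y % FACTOR == 0 and x % FACTOR == 0) else 0
         for x in range(SIZE)]
        for y in range(SIZE)]

small = nearest_neighbor_downscale(full, FACTOR)

# At full resolution the payload occupies 1 in 4 pixels; after downscaling,
# every surviving pixel is a payload pixel, so the hidden layer dominates.
full_ratio = sum(map(sum, full)) / (SIZE * SIZE)
small_ratio = sum(map(sum, small)) / (len(small) * len(small[0]))
print(full_ratio, small_ratio)  # 0.25 1.0
```

In a real attack, the payload pixels render hidden prompt text in the downscaled image the model actually sees, while the full-resolution image shown to the user looks harmless, which is why the victim can upload it without suspicion.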
