Security researchers successfully executed prompt injection attacks against three AI agents that integrate with GitHub Actions: Anthropic's Claude Code, Google's Gemini CLI Action, and Microsoft's GitHub Copilot. The attacks enabled theft of API keys and access tokens, yet the vendors did not publicly disclose the vulnerabilities, raising concerns over user security. Aonan Guan, who led the research with Johns Hopkins University, noted that without public advisories, many users remain unaware that they are exposed.

By manipulating pull request titles and comments, the researchers gained control over the agents, demonstrating that other GitHub-integrated tools could be similarly susceptible. The vendors awarded minor bug bounties, but no CVEs were assigned, which raises serious questions about how such flaws are tracked. Guan recommended treating AI agents like critical employees: restrict their permissions and use allow lists to minimize risk. The findings underscore the need for stronger security measures around AI agents in software development environments.
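The attack vector described here, untrusted pull request text flowing into an automated workflow, mirrors the well-documented script injection pattern in GitHub Actions. As a hedged illustration only (this workflow fragment is hypothetical and is not taken from the research), interpolating `github.event.pull_request.title` directly into a `run` step lets a crafted title break out into the shell, whereas passing it through an environment variable keeps it as plain data:

```yaml
# HYPOTHETICAL sketch of the general injection class, not the researchers' exploit.

# UNSAFE: the PR title is expanded into the script text before the shell parses it,
# so a malicious title can terminate the string and run arbitrary commands.
- name: Announce PR (unsafe)
  run: echo "New PR: ${{ github.event.pull_request.title }}"

# SAFER: bind the untrusted value to an environment variable first;
# the shell then treats the title as data, not as script.
- name: Announce PR (safer)
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
  run: echo "New PR: $PR_TITLE"
```

The same least-privilege thinking applies to the `permissions:` key on workflows and to the scopes granted to agent tokens, which aligns with Guan's advice to restrict permissions and maintain allow lists.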