Google has unveiled a bug bounty program targeting artificial intelligence, inviting security researchers and ethical hackers to identify vulnerabilities in its AI systems. The initiative, an expansion of its existing Vulnerability Reward Program, offers rewards of up to $30,000 for serious flaws, such as rogue AI actions that compromise user data or device security. Researchers are encouraged to find high-risk exploits, including scenarios in which an attacker hijacks products such as Google Home or Gmail. Issues with AI-generated content, however, are not eligible for rewards; Google asks that these be reported directly so they can be used to improve model training. The program covers AI security across products including Search, Gemini, and Drive, and AI-related reports have already earned researchers more than $430,000 over the past two years. Alongside the bounty program, Google introduced CodeMender, an AI tool designed to detect and fix security vulnerabilities in open-source software, underscoring AI's growing role in bolstering software security.