On Thursday, Google revealed that the North Korean hacking group UNC2970 had leveraged its Gemini AI model for reconnaissance. The tactic is part of a broader trend in which hacking groups weaponize AI across multiple phases of an attack, from information operations to model extraction. Google's Threat Intelligence Group (GTIG) detailed how UNC2970 synthesized open-source intelligence (OSINT) to target cybersecurity firms, mapping technical roles and salary information to craft tailored phishing lures. Notably, this behavior blurs the line between legitimate professional research and malicious intent.

Other groups, such as UNC6418 and Temp.HEX, similarly use Gemini to gather sensitive data and conduct operational reconnaissance. Google also identified malware such as HONESTCUE, along with phishing kits built for credential theft. Experts warn that misuse of generative AI in cyber threats is growing, underscoring the need for stronger safeguards and AI-enabled defensive strategies to counter these evolving threats.
