Microsoft’s threat researchers recently uncovered a phishing campaign that leveraged a large language model (LLM) to generate obfuscated malicious code designed to hide payloads and evade detection. The campaign targeted U.S. organizations from compromised email accounts, sending messages disguised as benign file-sharing notifications. Attached were malicious SVG files masquerading as PDFs, with JavaScript embedded inside. Microsoft Security Copilot analyzed the code’s structure and concluded it was likely generated by an LLM rather than written by hand. While the attackers used AI to automate and obfuscate their phishing, the campaign also underscored AI’s growing role on both the offensive and defensive sides of cybersecurity. Microsoft emphasized that AI-generated threats remain detectable, since they leave identifiable artifacts such as overly verbose naming and redundant logic. Organizations should anticipate these AI-driven phishing tactics and strengthen their defenses accordingly.
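As a rough illustration of the delivery trick described above (not Microsoft’s actual detection logic, and with hypothetical pattern choices), a minimal defensive check could flag SVG attachments that embed script content, since SVG files can execute JavaScript when rendered in a browser:

```python
import re

def svg_contains_script(svg_text: str) -> bool:
    """Return True if an SVG document appears to embed executable script.

    A simple heuristic sketch: flags <script> elements, on* event-handler
    attributes, and javascript: URLs, all of which allow an SVG to run
    JavaScript when opened in a browser.
    """
    patterns = [
        r"<\s*script\b",   # embedded <script> element
        r"\bon\w+\s*=",    # event-handler attributes, e.g. onload=
        r"javascript\s*:", # javascript: URLs in href/xlink:href
    ]
    return any(re.search(p, svg_text, re.IGNORECASE) for p in patterns)

# A plain-graphics SVG passes; one carrying a script element is flagged.
benign = '<svg xmlns="http://www.w3.org/2000/svg"><rect width="10" height="10"/></svg>'
suspicious = '<svg xmlns="http://www.w3.org/2000/svg"><script>fetch("//example.test")</script></svg>'
print(svg_contains_script(benign))      # False
print(svg_contains_script(suspicious))  # True
```

A regex check like this is easy to evade and is no substitute for a real content-security scanner; it only demonstrates why an SVG attachment posing as a PDF deserves scrutiny.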