A recent security test by CodeWall shows how rapidly autonomous AI systems can pinpoint vulnerabilities in complex platforms. Using a specialized offensive AI agent, researchers targeted McKinsey’s generative AI platform, Lilli, without credentials or insider knowledge. The agent analyzed publicly reachable API endpoints, found two dozen inadequately protected interfaces, and ultimately exploited a critical SQL injection vulnerability.
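The report does not disclose the actual endpoint or query involved, but the generic SQL injection pattern, and its standard mitigation, can be sketched as follows. This is an illustrative example using an in-memory SQLite database; the table, column names, and payload are invented for demonstration and have no connection to Lilli’s schema.

```python
import sqlite3

def setup() -> sqlite3.Connection:
    # Hypothetical schema for illustration only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users (name, secret) VALUES ('alice', 's3cr3t')")
    conn.commit()
    return conn

def find_user_vulnerable(conn: sqlite3.Connection, name: str):
    # BAD: user input is concatenated directly into the SQL string.
    # A payload like "x' OR '1'='1" turns the WHERE clause into a tautology,
    # so the query matches every row in the table.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # GOOD: a bound placeholder makes the driver treat the input as data,
    # never as SQL, so the same payload matches nothing.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

conn = setup()
payload = "x' OR '1'='1"
leaked = find_user_vulnerable(conn, payload)  # matches every row
safe = find_user_safe(conn, payload)          # matches no rows
```

Automated agents excel at exactly this kind of probing: systematically fuzzing exposed endpoints with payloads like the one above until one slips through.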
This vulnerability gave the agent unrestricted access to Lilli’s production database, which held about 46.5 million chat messages and sensitive customer data, including 728,000 confidential files and 57,000 user accounts. The incident underscores the urgency for companies to harden their systems against AI-driven attacks, since malicious actors can wield similar tools to uncover the same weaknesses. Although McKinsey promptly patched the vulnerabilities after disclosure and reported no evidence of a data breach, the episode is a stark reminder of the growing risk posed by automated cyber threats in a world increasingly reliant on generative AI.
Businesses must bolster their defenses proactively to mitigate these evolving threats.
