In 2025, the cybercrime landscape has shifted markedly, with artificial intelligence (AI) now central to the operations of malicious actors on underground forums. Google's Threat Intelligence Group reports notable growth in the market for illicit AI tools: mentions rose 200% from 2023 to 2024. This growth has lowered the barrier to entry, enabling less experienced criminals to mount sophisticated attacks.

Key tools include WormGPT, which specializes in phishing and business email compromise, and FraudGPT, which offers advanced coding capabilities. Multi-functional platforms such as NYTHEON AI highlight this shift, bundling services ranging from malware creation to deepfake generation. AI-generated phishing attacks have reportedly surged by 1,265%, underscoring their effectiveness.

With subscription models that mirror legitimate software, financially motivated threat actors can access advanced capabilities at low cost. As AI tools democratize cybercrime, organizations face an increasingly complex security challenge: AI-supported phishing now accounts for over 80% of global social engineering activity.