Attention AI Enthusiasts: Cybercriminals Are Exploiting Jailbroken Mistral and Grok Tools to Develop Advanced Malware

Recent research from Cato CTRL highlights the growing use of legitimate AI tools, such as xAI's Grok and Mistral's Mixtral, by cybercriminals, posing significant security risks. These models are being exploited to power “WormGPT” variants—malicious generative AI services that produce harmful code, social engineering lures, and hacking tutorials. The researchers also noted a surge in alternative uncensored LLMs, such as FraudGPT, further expanding the cybercriminal toolkit. One variant, keanu-WormGPT, can generate convincing phishing emails and, once its guardrails were bypassed, revealed that it was built on Grok. Threat actors are likewise attempting to jailbreak established models like ChatGPT to evade safety protocols, and there is a growing trend of recruiting AI specialists to design custom LLMs for specific malicious purposes. As AI lowers the barrier to entry for cybercrime, these developments are likely to drive a rise in cyber threats in the near future.
