A new era of cyber risk is emerging as AI tools such as ChatGPT and Perplexity become default search engines. Netcraft’s study found that one in three AI-suggested login URLs is dangerous: 66% of recommended domains belonged to the correct brand, but 34% did not, exposing users to phishing. In one striking example, Perplexity listed a phishing clone of the Wells Fargo login page as its top result.

Smaller financial institutions are at particular risk because they are underrepresented in AI training data, which raises the likelihood of incorrect URL suggestions. Cybercriminals are already building phishing pages optimized to be surfaced by AI models, so users and organizations must remain vigilant. The risk extends to AI coding assistants as well, which can recommend malicious code without the developer realizing it.

As AI becomes integral to online interaction, AI providers must make security and accuracy a priority to counter this next generation of phishing attacks.
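The vigilance the article calls for can be partly automated: before trusting an AI-suggested login URL, compare its hostname against an allowlist of the brand's official domains. Below is a minimal sketch in Python; the Wells Fargo domain names in the allowlist are illustrative assumptions, not a vetted list, and the check is deliberately naive (it does not handle multi-part public suffixes or homoglyph lookalike domains).

```python
from urllib.parse import urlparse

# Hypothetical allowlist of a brand's official domains.
# In practice this should come from the institution itself, never from AI output.
OFFICIAL_DOMAINS = {"wellsfargo.com"}

def is_official(url: str, allowlist: set[str] = OFFICIAL_DOMAINS) -> bool:
    """Return True only if the URL's host is an allowlisted domain
    or a subdomain of one. Naive sketch: does not handle multi-part
    public suffixes (e.g. .co.uk) or homoglyph domains."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in allowlist)

print(is_official("https://connect.secure.wellsfargo.com/login"))  # True
print(is_official("https://wellsfargo.login-secure.example/"))     # False
```

Note that suffix matching alone is why the second URL fails: `wellsfargo.login-secure.example` merely contains the brand name, a common phishing trick, and is not a subdomain of `wellsfargo.com`.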