Cybersecurity experts warn that “unregulated AI use” is creating significant risk: 84% of the AI web tools analyzed in a recent study had experienced data breaches. The issue gained attention when shared ChatGPT conversations were indexed in Google search results, exposing weaknesses in how AI platforms handle user data. OpenAI responded by retiring the sharing feature, but experts stress that the incident reflects a broader systemic problem.

A Business Digital Index study found that while about half of major AI providers received high cybersecurity ratings, OpenAI scored a D and Inflection AI an F, and most providers exhibited SSL/TLS misconfigurations and hosting vulnerabilities. Žilvinas Girėnas noted that the pace of AI adoption is outstripping governance, putting sensitive data at serious risk: 75% of employees use AI at work, yet only 14% of organizations have formal AI policies. Cybernews recommends enforcing AI usage policies, auditing vendors for security, and educating employees on the risks of unsecured AI tools.