Anthropic’s recent announcement highlights concerns that bad actors are exploiting AI tools for automated hacking. It builds on similar claims from other AI firms: in February 2024, OpenAI reported disrupting five state-affiliated actors, some linked to China, who had sought to use AI for tasks such as writing code and querying information. Anthropic, however, did not disclose how it confirmed the hackers’ ties to the Chinese government.

Critics in the cybersecurity community argue that claims of AI-enabled hacks may be exaggerated, contending that current technology remains too cumbersome to carry out effective automated cyberattacks. A November research paper from Google likewise noted that while AI could be used to develop novel malicious software, such tools are still in their testing phases and not particularly successful.

In response to these threats, Anthropic advocates using AI as a defensive tool, acknowledging that while its AI system makes errors, it remains a valuable asset in cybersecurity.