OpenAI has warned that new artificial intelligence (AI) models may pose “high” cybersecurity risks because of their advanced dual-use capabilities. Such models could develop working zero-day exploits or assist in complex cyber intrusions, a significant concern for the security community. In recent months, the performance of OpenAI’s models in capture-the-flag challenges has improved dramatically, raising alarms about potential misuse.

Despite these risks, industry experts such as Allan Liska caution against exaggerating the threat, noting that existing best practices can mitigate many vulnerabilities. OpenAI says it is committed to strengthening its models’ defensive capabilities, developing tools that help security teams carry out vital workflows such as auditing code and patching vulnerabilities.

The company plans to introduce a trusted access program to control user capabilities and is establishing a Frontier Risk Council to guide responsible model use. Additionally, OpenAI’s agentic security tool, Aardvark, is now in private beta, assisting with vulnerability identification and remediation across codebases.