OpenAI has issued a stark warning about the cybersecurity risks posed by its next-generation AI models, cautioning that they may be capable of tasks typically reserved for advanced hacking groups. The alert comes as the company accelerates development to compete with Google's Gemini project. In a recent blog post, OpenAI acknowledged that future models could create zero-day exploits or assist in sophisticated intrusions into industrial networks.

To mitigate these risks, OpenAI says it is focusing on tuning models for defensive cybersecurity roles, developing tools for code auditing and vulnerability patching, and it will implement layered safeguards that include strict access controls and monitoring systems. The company also plans tiered access for vetted researchers and cyber defense groups, and a new advisory body, the Frontier Risk Council, will deepen collaboration between OpenAI and outside security experts.

Even as it prioritizes risk management, internal pressure for rapid progress is pushing the company to leverage real-time user feedback for faster model training, a practice that has raised concerns about consistency.