OpenAI is enhancing its internal security measures to safeguard its intellectual property from corporate espionage, particularly in light of allegations involving Chinese AI firms. A recent report from the Financial Times indicates that OpenAI has implemented stricter controls and more rigorous staff vetting following the launch of DeepSeek’s R1 model, which was allegedly trained on ChatGPT data. To prevent a recurrence, OpenAI has adopted a “tenting” system that limits project access to a select group of team members. Additional security measures include biometric authentication, a “deny-by-default” internet policy, and air-gapped infrastructure to protect critical data. With former Palantir security chief Dane Stuckey now serving as CISO and retired General Paul Nakasone on its board, OpenAI’s focus on cybersecurity underscores the growing importance of protecting generative AI models. However, these measures have created internal friction, making cross-team collaboration and development workflows more cumbersome. The trade-off reflects a broader industry trend toward securing AI innovations.
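OpenAI’s actual policy engine is not public, but the “deny-by-default” principle itself is straightforward: outbound connections are rejected unless a destination has been explicitly approved. The sketch below illustrates the idea in Python; the allowlist entries, hostnames, and the `egress_allowed` helper are all hypothetical, invented here purely for illustration.

```python
# Minimal sketch of a deny-by-default egress check (illustrative only;
# OpenAI's real implementation and rules are not public).
from urllib.parse import urlparse

# Hypothetical allowlist: every destination is denied unless a team
# has explicitly requested and justified access to it.
EGRESS_ALLOWLIST = {
    "pypi.org",          # package installs
    "github.com",        # source dependencies
    "internal.example",  # hypothetical internal service
}

def egress_allowed(url: str) -> bool:
    """Return True only if the destination host is explicitly allowlisted.

    Deny-by-default inverts the usual blocklist model: unknown hosts
    are rejected, so opening a new egress path requires an explicit,
    auditable policy change rather than relying on a missed block rule.
    """
    host = urlparse(url).hostname or ""
    return host in EGRESS_ALLOWLIST

# An approved destination passes; an unreviewed one is blocked by default.
assert egress_allowed("https://pypi.org/simple/")
assert not egress_allowed("https://unknown-host.example.com/upload")
```

The operational cost hinted at in the article follows directly from this model: any new tool or dependency a team wants to reach requires a policy change first, which is exactly the kind of friction that slows cross-team workflows.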