OpenAI has significantly revamped its security operations to guard against corporate espionage, particularly in light of competition from Chinese startup DeepSeek, which allegedly copied OpenAI’s models. According to the Financial Times, the enhanced measures include “information tenting” policies that restrict staff access to sensitive algorithms during project development: only verified team members may discuss specific projects in shared spaces, keeping tighter control over who knows what.
Additional measures include isolating proprietary technology on offline systems and biometric access controls, such as fingerprint scanning. A “deny-by-default” internet policy requires explicit approval for any external connection. The report suggests these changes are driven by rising fears of foreign entities attempting to steal intellectual property, as well as internal security concerns, amid competitive pressure in the American AI landscape. OpenAI has also increased physical security at its data centers and expanded its cybersecurity staff, underscoring its commitment to protecting sensitive information.