OpenAI is joining Anthropic in restricting access to powerful cyber-capable AI models amid rising safety concerns. Frontier models such as OpenAI's GPT-5.3-Codex will be available only through the "Trusted Access for Cyber" program, aimed at defensive security operators. The controlled rollout, announced in February, reflects a broader industry shift toward invite-only access designed to keep powerful tools from being misused. Anthropic's own model, Claude Mythos, demonstrated alarming capabilities in detecting vulnerabilities, prompting the company to limit access to select organizations such as Amazon and Google.

Both companies are curbing general access preemptively, ahead of potential governmental intervention, in an approach that treats these systems more like classified research than consumer products. Powerful AI systems are thus moving from public launches to selective distribution, strengthening defensive cybersecurity in an evolving digital landscape. The trend underscores an urgent need for robust safety protocols as risks from advanced AI technologies escalate.