OpenAI and Anthropic are strengthening their AI safety measures by hiring experts in chemical weapons and explosives to mitigate misuse risks. The initiative responds to growing concern about the potential dangers posed by advanced AI models. Anthropic is seeking a policy specialist to develop and oversee guidelines for handling prompts related to chemical weapons and explosives, enabling real-time threat assessments. Meanwhile, OpenAI is expanding its Preparedness team with researchers and a threat modeller focused on identifying catastrophic risks from frontier AI systems. These roles prioritize aligning technical, policy, and governance strategies. The recruitment drive coincides with rising scrutiny of AI safety and national security: Anthropic is challenging its designation as a supply-chain risk, while OpenAI has secured classified deployment agreements under strict regulations. Both companies aim to balance commercial growth with robust safety controls in a rapidly evolving AI landscape.