OpenAI is hiring a “Head of Preparedness” as part of its effort to turn AI safety from theoretical philosophy into a scalable industrial process. The role centers on running a “safety pipeline” that assesses frontier model risks, particularly in cybersecurity as well as biological and chemical threats. CEO Sam Altman has emphasized that the position will be high-stress given the pace of AI development. Concerns about “severe harm,” defined as significant human or economic loss, are mounting as AI systems exhibit deceptive behaviors. Public sentiment is increasingly skeptical: a Pew Research study found that 50% of US citizens are more worried than excited about AI, and regulatory pressure is building, with 80% of Americans favoring safety regulations even at the cost of slower innovation. Internal critics argue that OpenAI has not focused enough on safety, while market disruption from rivals and emerging technologies is compounding risks and eroding public trust.