OpenAI CEO Sam Altman announced a collaboration with the Pentagon to establish safety protocols for AI models in military applications. Altman emphasized OpenAI’s commitment not to use AI for mass surveillance or autonomous lethal weapons, insisting that humans remain involved in critical automated decisions. This stance contrasts with competitor Anthropic, whose AI model Claude has come under scrutiny after its technology was reportedly used in the U.S. government’s operation against Venezuelan President Nicolás Maduro. Anthropic, which holds a $200 million contract for “agentic workflows,” is under pressure from the Pentagon to relax its surveillance restrictions and has until Friday to negotiate or face potential repercussions. Meanwhile, employees at industry giants such as Amazon, Google, and Microsoft are urging their companies to back Anthropic’s position and adopt similar ethical guardrails in military collaborations. The growing tension underscores the need for responsible AI practices amid evolving government and corporate dynamics in defense technology.