Anthropic, a US AI safety and research company founded in 2021, positions ethical AI use above profit, in contrast with OpenAI, which has accommodated military demands. Its AI assistant, Claude, is designed to be helpful, honest, and harmless, but Anthropic's ethical restrictions on its use have drawn backlash from the Pentagon. Even so, U.S. Central Command reportedly used Claude during operations against Iran, underscoring the tension between Trump's orders to stop using Anthropic's technology and the military's reliance on it. Anthropic's CEO, Dario Amodei, has resisted military pressure to relax the company's ethical guidelines for AI, a stance that put its government contracts at risk of cancellation. Meanwhile, OpenAI CEO Sam Altman secured a government deal after negotiations with Anthropic broke down. The debate over AI in military contexts continues, echoing past declarations from experts warning against autonomous weapons. As advanced AI systems are integrated into military operations, ethical safeguards and oversight become crucial to balancing innovation with legal and moral standards in warfare.