OpenAI recently secured a deal with the Department of War and sought to build trust by publishing the contract's details, but the move has drawn backlash. After President Trump ordered federal agencies to stop using Anthropic's technology over its refusal to support mass surveillance and autonomous weapons, OpenAI quickly adopted similar red lines, prompting questions about why the Pentagon accepted OpenAI's terms but not Anthropic's.
Despite OpenAI's assurances, critics point to vague language, particularly the phrase "all lawful purposes," which could open loopholes. Anthropic's CEO, Dario Amodei, noted that the government could still exploit AI in ways that fall short of being classified as mass surveillance. OpenAI's guideline on autonomous weapons is similarly unclear, raising concerns about human oversight. While OpenAI has tried to clarify its position, doubts persist about what its contract actually guarantees, especially in a political climate where legal frameworks are frequently disregarded. The controversy has driven users to rally behind Anthropic, pushing its Claude app to the top of the App Store.