OpenAI has finalized a contract with the U.S. Department of War, following Anthropic's withdrawal from similar negotiations over ethical concerns about surveillance and autonomous weapons. The move has drawn backlash from ChatGPT users, who accuse OpenAI of crossing an ethical line.

Anthropic had sought explicit safeguards against mass surveillance; when the Department declined, the company exited talks. OpenAI, by contrast, says its agreement includes strict controls prohibiting mass domestic surveillance and the use of AI in autonomous weaponry. Despite those assurances, many users are canceling subscriptions, citing what they see as a betrayal of the company's ethical standards.

Executives argue that engagement with government can promote responsible AI use, while critics worry about how broadly "lawful purposes" might be interpreted in a national-security context. The controversy highlights a growing divide in the AI industry between pursuing lucrative government contracts and maintaining ethical integrity. As discussions continue, both companies face scrutiny over their commitments to ethical AI deployment.
