On February 5, Anthropic launched Claude Opus 4.6, an advanced AI model capable of coordinating teams of autonomous agents to work on tasks in parallel. Shortly after, the company released the more affordable Sonnet 4.6, which offers similar capabilities. Both models can fill out forms and navigate web applications at a near-human level, significantly enhancing their utility in enterprise settings, which now account for 80% of Anthropic’s revenue.

The company faces challenges, however, as the Pentagon considers labeling it a “supply chain risk” over its restrictions on military use. Tensions escalated following a controversial operation involving U.S. special forces and Claude, raising concerns about the ethical implications of AI in classified military applications. Anthropic maintains a commitment to avoiding mass surveillance and fully autonomous weapons, yet the line between ethical AI use and military demands is increasingly blurred, prompting debate over how to balance safety and national security in AI development.