Anthropic has revised its Responsible Scaling Policy, dropping a key pledge not to advance AI model training until specific safeguards were in place. The change brings the company's approach closer to that of rivals such as OpenAI and Google, which are also revising their AI safety frameworks. Anthropic had long positioned itself as a safety-first lab, but it now argues that pausing model training may do little good amid rapid industry-wide progress; Chief Science Officer Jared Kaplan stressed the need to adapt to shifting industry dynamics.

The decision has fueled debate over how AI safety is defined and what the shift means for investors and policymakers. Analysts including RAND Corporation's Edward Geist argue that the original safety paradigm is outdated and that companies should update both their language and their practices as AI technology evolves. Despite tensions with the Pentagon over full access to its AI models, Anthropic aims to signal a continued commitment to innovation while advocating lighter regulation amid mounting competitive pressure in the AI sector.