Artificial intelligence (AI) tools have evolved from answering questions to taking actions on a user's behalf, such as booking meetings and writing code, a shift described as "agentic AI." In response, the National Institute of Standards and Technology (NIST) has launched the AI Agent Standards Initiative to create guidelines for building and securing these systems, with the goal of maintaining U.S. leadership in global AI standards. The initiative sits within the Center for AI Standards and Innovation (CAISI), which replaced the U.S. AI Safety Institute.

The growing integration of agentic AI into corporate workflows raises significant security concerns, underscored by vulnerabilities such as the "EchoLeak" flaw. Industry experts warn that NIST's pace in developing guidance may lag behind the technology's rapid advancement. NIST is soliciting public input on AI risks and safeguards and is emphasizing interoperability among AI agents to prevent market fragmentation. With listening sessions planned for April, the path to clearer rules remains uncertain, leaving companies to navigate these risks on their own for now.
