Friday, February 20, 2026

AI Agents Are Improving, But Their Safety Disclosures Are Falling Short

AI agents are gaining traction, with systems like OpenClaw and Moltbook emerging. These systems can autonomously plan, code, and manage workflows, which makes them appealing precisely because they require little human supervision. However, a study by the MIT AI Agent Index highlights a troubling lack of transparency around safety. While 70% of AI agents provide documentation, only 19% have a formal safety policy, and fewer than 10% undergo external safety evaluations. The researchers stress that the autonomy of AI agents brings risk: when agents interact with sensitive data, mistakes can have significant consequences. Developers eagerly showcase capabilities yet often withhold critical information about safety protocols and testing processes, creating an imbalance in transparency. As AI agents transition from prototypes to fully integrated systems, addressing these safety gaps is essential to ensure responsible deployment and maintain user trust.
