Security researchers recently disclosed three vulnerabilities in mcp-server-git, the Git server for Anthropic's Model Context Protocol (MCP) used by its AI model Claude. The flaws (CVE-2025-68143, CVE-2025-68144, and CVE-2025-68145) permitted unauthorized file access, file deletion, and even remote code execution (RCE). All three stemmed from inadequate path validation in mcp-server-git, which could be exploited via prompt injection: an attacker could embed malicious instructions that steer the model's Git operations, putting data at risk even in enterprise settings where AI agents operate largely autonomously.

Anthropic patched the vulnerabilities quickly, but industry experts stress the need for stricter safeguards, such as whitelisting the repositories an agent may touch and auditing AI server integrations (see the sketch below).

The incident underscores the ongoing challenge of balancing innovation with security in AI technologies, and strengthens the case for standardized protocols and proactive disclosure as AI systems become deeply integrated into critical sectors.
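The report does not detail the actual patch, but the mitigation it recommends (strict path validation combined with a repository allowlist) can be sketched in Python. The names below (ALLOWED_REPOS, validate_repo_path) are illustrative assumptions, not the real mcp-server-git API:

```python
from pathlib import Path

# Hypothetical allowlist of repositories the agent may operate on;
# mcp-server-git's actual configuration mechanism may differ.
ALLOWED_REPOS = {
    Path("/srv/repos/internal-docs").resolve(),
    Path("/srv/repos/team-tools").resolve(),
}

def validate_repo_path(raw_path: str) -> Path:
    """Resolve a requested path and confirm it lies inside an
    allowlisted repository, rejecting traversal tricks like '../'."""
    # resolve() canonicalizes the path, collapsing '..' segments
    # and following symlinks, so comparisons use the real location.
    candidate = Path(raw_path).resolve()
    for repo in ALLOWED_REPOS:
        if candidate.is_relative_to(repo):
            return candidate
    raise PermissionError(f"repository path not allowlisted: {candidate}")

# A path under an allowlisted root passes:
safe = validate_repo_path("/srv/repos/internal-docs/src")
# A traversal attempt resolves outside every allowlisted root and fails:
# validate_repo_path("/srv/repos/internal-docs/../../etc/passwd")  # PermissionError
```

The key design point is canonicalizing before comparing: checking the raw string against a prefix is exactly the kind of inadequate validation the advisory describes, since '../' sequences or symlinks can escape the intended root.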
