Cybersecurity researchers have identified three significant vulnerabilities in Anthropic's mcp-server-git, the reference Git server for the Model Context Protocol (MCP). The flaws, exploitable via prompt injection, let attackers manipulate AI assistants without direct system access and affect all versions released before December 8, 2025.
Discovered by Cyata, the vulnerabilities stem from the server's failure to validate repository paths, allowing attackers to steer AI behavior through malicious README files or compromised repository descriptions. The resulting risks include arbitrary file deletion and code execution; while the flaws do not directly enable data exfiltration, they can expose sensitive files to the AI system, raising the overall security impact.
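To illustrate the missing control, the class of defense at issue is a repository-path allow-list check. The following is a minimal Python sketch under assumed paths and naming; it is not the actual patch Anthropic shipped in mcp-server-git.

```python
from pathlib import Path

# Illustrative allow-list; in a real deployment this would come from the
# server's startup configuration (paths here are assumptions for the example).
ALLOWED_REPOS = {Path("/home/user/projects/app").resolve()}


def validate_repo_path(requested: str) -> Path:
    """Reject repository paths outside a configured allow-list.

    A minimal sketch of the kind of check the advisory describes as missing;
    it is not the actual fix applied to mcp-server-git.
    """
    repo = Path(requested).resolve()  # collapses ".." segments and symlinks
    if repo not in ALLOWED_REPOS:
        raise ValueError(f"repository path not allowed: {repo}")
    return repo
```

A check of this shape means a prompt-injected instruction naming an arbitrary path is rejected before any git operation runs against it.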
Because these issues affect Anthropic's reference MCP implementation, users are advised to update immediately. The vulnerabilities, tracked as CVE-2025-68143, CVE-2025-68144, and CVE-2025-68145, were acknowledged by Anthropic, which released fixes in December 2025. Users should also review their MCP server configurations to minimize exposure.
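As part of that configuration review, one practical check is confirming that any registered git server is scoped to repositories you trust. The sketch below assumes an MCP client config in the common `mcpServers` JSON layout (as used by Claude Desktop) and that mcp-server-git is launched with its `--repository` argument; the file location, trusted root, and flagging logic are assumptions for illustration only.

```python
import json
from pathlib import Path

# Assumed config location; Claude Desktop keeps MCP registrations in a
# claude_desktop_config.json file whose exact path varies by platform.
CONFIG_PATH = Path("claude_desktop_config.json")

# Directory you consider safe for git repositories (assumption for the example).
TRUSTED_ROOT = (Path.home() / "projects").resolve()

config = json.loads(CONFIG_PATH.read_text())

for name, server in config.get("mcpServers", {}).items():
    args = server.get("args", [])
    if "mcp-server-git" not in args or "--repository" not in args:
        continue
    idx = args.index("--repository")
    if idx + 1 >= len(args):
        continue
    # The value following --repository is the repository the server may act on.
    repo = Path(args[idx + 1]).resolve()
    if not repo.is_relative_to(TRUSTED_ROOT):
        print(f"Review server '{name}': repository {repo} lies outside {TRUSTED_ROOT}")
```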