A critical vulnerability known as “ContextCrush” has been uncovered in the Context7 MCP Server, extensively used for delivering documentation to AI coding assistants. Discovered by Noma Labs, this flaw poses a significant risk as it enables attackers to inject malicious instructions into AI tools through a trusted documentation channel. Context7, operated by Upstash, plays a crucial role in AI-assisted development with approximately 50,000 GitHub stars and over 8 million npm downloads.
The vulnerability arises from the “Custom Rules” feature, where AI-specific instructions are submitted without filtering. Attackers can exploit this by inserting harmful commands into library documentation, which AI assistants interpret as legitimate guidance. Researchers demonstrated the potential impact by showcasing an attack that led the AI to exfiltrate sensitive information.
Upstash quickly addressed the issue, rolling out fixes that included rule sanitization and enhanced safeguards. Analysts caution that such vulnerabilities highlight significant trust issues inherent in MCP server architectures.
Source link