Anthropic-maintained mcp-server-git has three critical security vulnerabilities that can be exploited through prompt injection, enabling a complete attack chain from arbitrary file access to remote code execution. These vulnerabilities were fixed at the end of 2025, but they remind us that security protections for AI toolchains are far from complete.
Technical Details of the Three High-Risk Vulnerabilities
The three vulnerabilities disclosed by Cyata security researcher are:
Vulnerability ID
Vulnerability Name
Risk Level
Core Issue
CVE-2025-68143
Unrestricted git_init
Critical
git_init tool lacks path restrictions
CVE-2025-68144
git_diff parameter injection
Critical
Insufficient parameter validation leading to command injection
CVE-2025-68145
Path validation bypass
Critical
Missing path validation in repo_path parameter
The integrity of the attack chain
The most dangerous aspect of these three vulnerabilities is their potential for combined exploitation. Since mcp-server-git does not validate the repo_path parameter’s path, an attacker can create Git repositories in arbitrary directories on the system. Coupled with the git_diff parameter injection vulnerability, an attacker can configure cleaning filters in .git/config to run shell commands without requiring execution permissions.
New threats from prompt injection
What makes these vulnerabilities particularly unique is their potential for weaponization via prompt injection. Attackers do not need direct access to the victim’s system; controlling the AI assistant to read malicious content is sufficient to trigger the vulnerabilities. Specifically, attackers can craft malicious README files or compromised web pages to cause AI assistants like Claude to inadvertently execute malicious commands during processing.
When combined with file system MCP servers, these vulnerabilities could allow attackers to execute arbitrary code, delete system files, or read any file contents into the large language model’s context, leading to severe data leaks and system compromise.
Anthropic’s Fixes
After receiving the CVE number on December 17, 2025, Anthropic acted swiftly, releasing a patch on December 18, 2025. The official measures include:
Removing the problematic git_init tool
Strengthening path validation mechanisms
Improving parameter validation logic
According to the latest updates, users need to update mcp-server-git to version 2025.12.18 or later to ensure security.
Lessons for AI Toolchain Security
This incident highlights a broader issue: as AI becomes integrated into more development tools, security complexity increases significantly. MCP (Model Context Protocol), serving as a bridge between AI and system tools, has its security directly impacting the entire system’s safety.
From a technical perspective, when AI assistants can invoke system tools, each vulnerability in those tools can be amplified. Prompt injection as an attack vector renders traditional access control and permission management largely ineffective.
Summary
The three critical vulnerabilities fixed by Anthropic serve as a reminder that security in AI toolchains must be considered from the design phase. While the official response was swift and effective, it is even more important for the industry to establish more comprehensive security auditing mechanisms. For developers using mcp-server-git, immediate updates to the latest version are essential. This also indicates that as AI is deeply integrated into development tools, the scope and impact of security vulnerabilities will grow, requiring increased attention and investment.
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
Anthropic's official Git tool exposes three major high-risk vulnerabilities, turning the AI assistant into a stepping stone for hacker intrusions
Anthropic-maintained mcp-server-git has three critical security vulnerabilities that can be exploited through prompt injection, enabling a complete attack chain from arbitrary file access to remote code execution. These vulnerabilities were fixed at the end of 2025, but they remind us that security protections for AI toolchains are far from complete.
Technical Details of the Three High-Risk Vulnerabilities
The three vulnerabilities disclosed by Cyata security researcher are:
The integrity of the attack chain
The most dangerous aspect of these three vulnerabilities is their potential for combined exploitation. Since mcp-server-git does not validate the repo_path parameter’s path, an attacker can create Git repositories in arbitrary directories on the system. Coupled with the git_diff parameter injection vulnerability, an attacker can configure cleaning filters in .git/config to run shell commands without requiring execution permissions.
New threats from prompt injection
What makes these vulnerabilities particularly unique is their potential for weaponization via prompt injection. Attackers do not need direct access to the victim’s system; controlling the AI assistant to read malicious content is sufficient to trigger the vulnerabilities. Specifically, attackers can craft malicious README files or compromised web pages to cause AI assistants like Claude to inadvertently execute malicious commands during processing.
When combined with file system MCP servers, these vulnerabilities could allow attackers to execute arbitrary code, delete system files, or read any file contents into the large language model’s context, leading to severe data leaks and system compromise.
Anthropic’s Fixes
After receiving the CVE number on December 17, 2025, Anthropic acted swiftly, releasing a patch on December 18, 2025. The official measures include:
According to the latest updates, users need to update mcp-server-git to version 2025.12.18 or later to ensure security.
Lessons for AI Toolchain Security
This incident highlights a broader issue: as AI becomes integrated into more development tools, security complexity increases significantly. MCP (Model Context Protocol), serving as a bridge between AI and system tools, has its security directly impacting the entire system’s safety.
From a technical perspective, when AI assistants can invoke system tools, each vulnerability in those tools can be amplified. Prompt injection as an attack vector renders traditional access control and permission management largely ineffective.
Summary
The three critical vulnerabilities fixed by Anthropic serve as a reminder that security in AI toolchains must be considered from the design phase. While the official response was swift and effective, it is even more important for the industry to establish more comprehensive security auditing mechanisms. For developers using mcp-server-git, immediate updates to the latest version are essential. This also indicates that as AI is deeply integrated into development tools, the scope and impact of security vulnerabilities will grow, requiring increased attention and investment.