CVE-2025-64110

CVSS 7.5 (HIGH)

📋 TL;DR

A logic bug in the Cursor AI code editor, versions 1.7.23 and below, lets malicious agents bypass its file protection mechanism. An attacker who achieves prompt injection, or who supplies a malicious AI model, can create new cursorignore files that override existing configurations, potentially exposing sensitive protected files. This affects every user of a vulnerable Cursor version who relies on cursorignore for file protection.

💻 Affected Systems

Products:
  • Cursor AI Code Editor
Versions: All versions up to and including 1.7.23
Operating Systems: Windows, macOS, Linux
Default Config Vulnerable: ⚠️ Yes
Notes: Exploitation requires either prompt injection or access to a malicious AI model. Standard cursorignore configurations are vulnerable.

📦 What is this software?

Cursor is an AI-assisted code editor (a fork of VS Code) that embeds large language models as coding agents. Its cursorignore files tell the editor which paths the AI must not read — the protection mechanism this vulnerability bypasses.

⚠️ Risk & Real-World Impact

🔴

Worst Case

Complete exposure of sensitive files protected by cursorignore, including credentials, API keys, configuration files, and proprietary source code to malicious actors.

🟠

Likely Case

Targeted extraction of specific sensitive files by attackers who have already achieved prompt injection in the AI assistant context.

🟢

If Mitigated

Limited impact if prompt injection is prevented and AI model access is restricted to trusted sources only.

🌐 Internet-Facing: LOW
🏢 Internal Only: MEDIUM

🎯 Exploit Status

Public PoC: ❌ No
Weaponized: UNKNOWN
Unauthenticated Exploit: ❌ No
Complexity: MEDIUM

Exploitation requires initial access via prompt injection or malicious AI model. The vulnerability itself is a logic bug that can be triggered once initial access is achieved.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: 2.0

Vendor Advisory: https://github.com/cursor/cursor/security/advisories/GHSA-vhc2-fjv4-wqch

Restart Required: Yes

Instructions:

1. Open the Cursor editor.
2. Go to Settings/Preferences.
3. Check for updates.
4. Install version 2.0 or higher.
5. Restart Cursor to apply the fix.

🔧 Temporary Workarounds

Disable AI features (all platforms)

Temporarily disable AI assistant features to prevent prompt injection attacks.

Restrict file access (all platforms)

Move sensitive files outside of Cursor project directories or use OS-level file permissions.
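
The "restrict file access" workaround can be sketched as a small script that moves secret-bearing files out of the project tree entirely. The file patterns and vault location below are assumptions — adapt them to your own layout; this is an illustration, not a Cursor feature:

```python
# Minimal sketch of the "restrict file access" workaround: move files
# matching sensitive patterns out of the Cursor project tree entirely.
# The patterns and vault location are assumptions -- adapt them to your
# own secrets layout. Name collisions in the vault are not handled.
import shutil
from pathlib import Path

def relocate_secrets(project_dir, vault_dir,
                     patterns=(".env", "*.pem", "*.key")):
    """Move files matching the given patterns into vault_dir."""
    vault = Path(vault_dir)
    vault.mkdir(parents=True, exist_ok=True)
    moved = []
    for pattern in patterns:
        for secret in Path(project_dir).rglob(pattern):
            target = vault / secret.name
            shutil.move(str(secret), str(target))
            moved.append(target)
    return moved

# Usage (illustrative): relocate_secrets("~/work/myapp", "~/secrets-vault")
```

Keeping the vault outside any directory Cursor opens means no cursorignore rule — overridable or not — governs access to it.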

🧯 If You Can't Patch

  • Implement strict input validation and sanitization for AI prompts to prevent injection attacks
  • Use alternative file protection mechanisms like OS permissions or encryption for sensitive files

🔍 How to Verify

Check if Vulnerable:

Check the installed Cursor version in Help > About, or from the command line:

cursor --version

Any version at or below 1.7.23 is affected.

Verify Fix Applied:

Confirm the installed version is 2.0 or higher, then test that files protected by an existing cursorignore remain inaccessible after a new cursorignore file is created.
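
The version check can be scripted by comparing the reported version against the 2.0 fix release named in the advisory. A minimal sketch, assuming you pass in the bare dotted version string extracted from `cursor --version` output; the parsing is generic, not a Cursor API:

```python
# Sketch: compare a Cursor version string against the 2.0 fix release.
# Assumes the caller extracts the bare dotted version (e.g. "1.7.23")
# from the `cursor --version` output.

FIXED_VERSION = (2, 0)

def parse_version(version: str) -> tuple:
    """'1.7.23' -> (1, 7, 23) for component-wise comparison."""
    return tuple(int(part) for part in version.strip().split("."))

def is_vulnerable(version: str) -> bool:
    """True when the version predates the 2.0 fix."""
    return parse_version(version) < FIXED_VERSION

print(is_vulnerable("1.7.23"))  # True  -> update required
print(is_vulnerable("2.0.1"))   # False -> fix applied
```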

📡 Detection & Monitoring

Log Indicators:

  • Unexpected cursorignore file modifications
  • AI assistant accessing protected file paths
  • Multiple cursorignore file creation events
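
The first two indicators can be spot-checked directly on disk by listing cursorignore files that changed recently. A minimal sketch — the 24-hour default window is an arbitrary threshold, not something Cursor defines:

```python
# Sketch for the log indicators above: list .cursorignore files under a
# project tree that were modified recently, which may signal an agent
# rewriting ignore rules. The 24-hour default window is arbitrary.
import time
from pathlib import Path

def recent_cursorignore(root, max_age_hours=24):
    """Return .cursorignore paths under root modified within the window."""
    cutoff = time.time() - max_age_hours * 3600
    return [path for path in Path(root).rglob(".cursorignore")
            if path.stat().st_mtime >= cutoff]
```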

Network Indicators:

  • Unusual AI model API traffic correlated with local file system access

SIEM Query:

source="cursor.log" AND ("cursorignore" OR "protected file access")
