CVE-2024-10940
📋 TL;DR
This vulnerability in langchain-core allows unauthorized users to read arbitrary files from the host file system by exploiting prompt templates: ImagePromptTemplate (reachable via ChatPromptTemplate) would resolve user-supplied input variables as local file paths and embed the file contents in the formatted output. It affects applications running vulnerable versions of langchain-core that expose prompt template outputs to users, potentially disclosing sensitive server files.
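For illustration, a minimal sketch of the vulnerable pattern on a pre-fix version. The exact template shape (the "path" key) reflects behavior removed by the fix commit linked below; treat it as an assumption and verify against your installed version:

```python
# Minimal sketch of the vulnerable pattern (assumes a pre-fix langchain-core,
# i.e. before 0.1.53 / 0.2.43 / 0.3.15). On affected versions, the image
# prompt template resolves a user-controlled value as a local file path and
# base64-encodes the file into the formatted message.
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("human", [{"type": "image_url", "image_url": {"path": "{user_input}"}}]),
])

# An attacker supplies a server-side path instead of an image URL.
messages = prompt.format_messages(user_input="/etc/passwd")

# If the application exposes this output (directly or via a downstream model
# response), the base64-encoded file contents leak to the user.
print(messages[0].content)
```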
💻 Affected Systems
- langchain-core
⚠️ Manual Verification Required
This CVE does not have structured version-range information in our database, so automatic vulnerability detection cannot determine whether your system is affected.
Why? The database entry carries no machine-readable version ranges, although the patched releases listed under Fix & Mitigation below (0.1.53, 0.2.43, 0.3.15) imply that earlier releases in each series are affected.
- Review the CVE details at NVD
- Check vendor security advisories for your specific version
- Test if the vulnerability is exploitable in your environment
- Consider updating to the latest version as a precaution
⚠️ Risk & Real-World Impact
Worst Case
Complete server file system disclosure, including configuration files, secrets, credentials, and sensitive user data, leading to full system compromise.
Likely Case
Partial file system access exposing configuration files, environment variables, or application secrets that could enable further attacks.
If Mitigated
Limited impact if prompt outputs are properly sanitized or never exposed to users; the file read still happens internally, but its contents are not disclosed.
🎯 Exploit Status
Exploitation requires access to prompt outputs but no authentication. The vulnerability is well documented, with a public proof-of-concept.
🛠️ Fix & Mitigation
✅ Official Fix
Patch Version: 0.1.53, 0.2.43, 0.3.15
Fix Commit: https://github.com/langchain-ai/langchain/commit/c1e742347f9701aadba8920e4d1f79a636e50b68
Restart Required: No
Instructions:
1. Update langchain-core to 0.1.53, 0.2.43, or 0.3.15 (or any later release in the corresponding series).
2. Run: pip install --upgrade langchain-core
3. Verify that the upgrade introduces no breaking changes in your application.
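If you pin dependencies, constraints along these lines (one per release series; adjust to your dependency file format) keep you on a patched release:

```
langchain-core>=0.1.53,<0.2.0   # staying on the 0.1 series
langchain-core>=0.2.43,<0.3.0   # staying on the 0.2 series
langchain-core>=0.3.15          # 0.3 series and later
```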
🔧 Temporary Workarounds
Input Validation and Sanitization
Implement strict input validation on all user-provided paths in prompt templates to prevent directory traversal.
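As a sketch, assuming image inputs arrive as strings and that only remote URLs are legitimate in your application (the function name and policy below are illustrative, not part of langchain-core):

```python
# Hedged sketch: reject anything that is not a plain http(s) URL before it
# reaches a prompt template.
from urllib.parse import urlparse

def validate_image_input(value: str) -> str:
    parsed = urlparse(value)
    if parsed.scheme not in ("http", "https"):
        # Bare filesystem paths have no scheme and are rejected here.
        raise ValueError("only http(s) image URLs are allowed")
    if ".." in value:
        # Defense in depth against traversal sequences embedded in URLs.
        raise ValueError("path traversal pattern rejected")
    return value

# Usage: validate before formatting the prompt, e.g.
# prompt.format_messages(user_input=validate_image_input(untrusted_value))
```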
Output Filtering
Filter or sanitize prompt template outputs before exposing them to users to prevent file content leakage.
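A corresponding sketch, assuming leaked file contents would surface as base64 data URLs (which is how the vulnerable code embedded local files); the regex and replacement token are illustrative:

```python
# Hedged sketch: scrub base64 data URLs from text before returning it to users.
import re

DATA_URL = re.compile(r"data:[\w.+/-]+;base64,[A-Za-z0-9+/=]+")

def sanitize_output(text: str) -> str:
    return DATA_URL.sub("[embedded data removed]", text)

# Usage:
# safe_text = sanitize_output(model_or_template_output)
```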
🧯 If You Can't Patch
- Implement strict network segmentation to isolate vulnerable systems from sensitive data stores.
- Deploy application-level firewalls to monitor and block suspicious file path patterns in requests.
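For the second item, a minimal pattern filter along these lines (e.g. wired into WSGI/ASGI middleware or a reverse-proxy rule; the patterns mirror the network indicators under Detection & Monitoring and are assumptions, not a complete blocklist):

```python
# Hedged sketch: flag request payloads containing file path patterns that
# have no business appearing in prompt parameters.
import re

SUSPICIOUS = re.compile(r"\.\./|/etc/|/root/|file://")

def is_suspicious(payload: str) -> bool:
    return bool(SUSPICIOUS.search(payload))
```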
🔍 How to Verify
Check if Vulnerable:
Check the langchain-core version with: pip show langchain-core | grep Version. If the version is below the patched release for its series (0.1.53 for 0.1.x, 0.2.43 for 0.2.x, 0.3.15 for 0.3.x) and your application exposes prompt template outputs, you are vulnerable.
Check Version:
pip show langchain-core | grep Version
Verify Fix Applied:
After updating, verify version is 0.1.53+, 0.2.43+, or 0.3.15+ and test that file path inputs no longer return file contents.
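A small script along these lines automates the version check (assumes the packaging library is installed, which pip-based environments typically have):

```python
# Hedged sketch: compare the installed langchain-core version against the
# patched releases named above (0.1.53 / 0.2.43 / 0.3.15).
from importlib.metadata import version
from packaging.version import Version

FIXED = {(0, 1): Version("0.1.53"), (0, 2): Version("0.2.43"), (0, 3): Version("0.3.15")}

v = Version(version("langchain-core"))
threshold = FIXED.get((v.major, v.minor))
if threshold is not None:
    print("patched" if v >= threshold else "VULNERABLE")
else:
    # Series not named in the advisory: anything newer than 0.3.15 is patched;
    # anything else needs a manual check against the advisory.
    print("patched" if v >= Version("0.3.15") else "manual check required")
```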
📡 Detection & Monitoring
Log Indicators:
- Unusual file path patterns in prompt inputs
- Multiple failed attempts to access system files
- Large data transfers from prompt responses
Network Indicators:
- Requests containing file path traversal or sensitive-path patterns (e.g. ../, /etc/, /root/) in prompt parameters
SIEM Query:
source="application_logs" AND (message="*ImagePromptTemplate*" OR message="*ChatPromptTemplate*") AND (message="*../*" OR message="*/etc/*" OR message="*/root/*")