CVE-2024-10940

5.3 MEDIUM

📋 TL;DR

This vulnerability in langchain-core allows unauthorized users to read arbitrary files from the host file system by exploiting prompt templates. It affects applications using vulnerable versions of langchain-core that expose prompt template outputs to users, potentially exposing sensitive server files.

💻 Affected Systems

Products:
  • langchain-core
Versions: >=0.1.17,<0.1.53, >=0.2.0,<0.2.43, >=0.3.0,<0.3.15
Operating Systems: All
Default Config Vulnerable: ⚠️ Yes
Notes: Only vulnerable when ImagePromptTemplate or ChatPromptTemplate outputs are exposed to users through model responses or direct output.
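To make the exposure concrete, here is an illustrative sketch of the vulnerable *pattern* (this is hypothetical code, not langchain-core's actual implementation): a template that treats any non-URL variable value as a local file path and inlines the file's contents into its rendered output.

```python
# Hypothetical example of the vulnerable pattern (NOT langchain-core's
# real code): user-controlled template input is resolved as a local
# file path, so the rendered output can leak arbitrary server files.
import base64


class NaiveImageTemplate:
    """Resolves an 'image' variable; local paths are read and inlined."""

    def format(self, image: str) -> str:
        if not image.startswith(("http://", "https://")):
            # Dangerous: any non-URL string is opened as a local file.
            with open(image, "rb") as f:
                return base64.b64encode(f.read()).decode()
        return image


# An attacker who controls the variable can request server files,
# e.g. NaiveImageTemplate().format("/etc/passwd") would return the
# base64-encoded file contents.
```

If the rendered template (or the model response built from it) is shown to the user, the file contents leak.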

⚠️ Manual Verification Required

Version matching alone cannot confirm whether your system is affected: the ranges above identify vulnerable releases, but exploitation also requires that your application expose ImagePromptTemplate or ChatPromptTemplate outputs to users.

Why? Exploitability depends on application behavior (whether template outputs reach users), not just the installed version, so automated detection can only flag candidate systems for manual review.

Recommended Actions:
  1. Review the CVE details at NVD
  2. Check vendor security advisories for your specific version
  3. Test if the vulnerability is exploitable in your environment
  4. Consider updating to the latest version as a precaution

⚠️ Risk & Real-World Impact

🔴 Worst Case

Complete server file system disclosure including configuration files, secrets, credentials, and sensitive user data, leading to full system compromise.

🟠 Likely Case

Partial file system access exposing configuration files, environment variables, or application secrets that could enable further attacks.

🟢 If Mitigated

Limited impact if prompt outputs are properly sanitized or not exposed to users, with only internal file access possible.

🌐 Internet-Facing: HIGH
🏢 Internal Only: MEDIUM

🎯 Exploit Status

Public PoC: ⚠️ Yes
Weaponized: LIKELY
Unauthenticated Exploit: ⚠️ Yes
Complexity: LOW

Exploitation requires user access to prompt outputs but no authentication. The vulnerability is well-documented with public proof-of-concept.

🛠️ Fix & Mitigation

✅ Official Fix

Patched Versions: 0.1.53, 0.2.43, 0.3.15

Fix Commit: https://github.com/langchain-ai/langchain/commit/c1e742347f9701aadba8920e4d1f79a636e50b68

Restart Required: No

Instructions:

1. Update langchain-core to version 0.1.53, 0.2.43, or 0.3.15 (or later within your minor version line).
2. Run: pip install --upgrade langchain-core
3. Verify no breaking changes in your application.

🔧 Temporary Workarounds

Input Validation and Sanitization (all platforms)

Implement strict input validation on all user-provided paths or URLs in prompt template variables to prevent directory traversal.

Output Filtering (all platforms)

Filter or sanitize prompt template outputs before exposing them to users to prevent file content leakage.
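A hedged sketch of output filtering: scan the rendered output for markers of leaked file content before returning it to a user. The marker list below is illustrative, not exhaustive.

```python
# Sketch: withhold rendered output that looks like leaked file content.
# The patterns are illustrative examples, not a complete denylist.
import re

LEAK_MARKERS = re.compile(
    r"root:.*:0:0:"                          # /etc/passwd-style lines
    r"|-----BEGIN [A-Z ]*PRIVATE KEY-----"   # key material
    r"|AWS_SECRET_ACCESS_KEY"                # env-style secrets
)


def redact_output(rendered: str) -> str:
    if LEAK_MARKERS.search(rendered):
        return "[output withheld: possible file content leak]"
    return rendered
```

Denylist filtering is a stopgap, not a fix: it reduces the blast radius of a leak but cannot catch every sensitive file, so upgrading remains the real remediation.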

🧯 If You Can't Patch

  • Implement strict network segmentation to isolate vulnerable systems from sensitive data stores.
  • Deploy application-level firewalls to monitor and block suspicious file path patterns in requests.

🔍 How to Verify

Check if Vulnerable:

Check langchain-core version with: pip show langchain-core | grep Version. If version falls within affected ranges and your application exposes prompt template outputs, you are vulnerable.

Check Version:

pip show langchain-core | grep Version

Verify Fix Applied:

After updating, verify version is 0.1.53+, 0.2.43+, or 0.3.15+ and test that file path inputs no longer return file contents.

📡 Detection & Monitoring

Log Indicators:

  • Unusual file path patterns in prompt inputs
  • Multiple failed attempts to access system files
  • Large data transfers from prompt responses

Network Indicators:

  • Requests containing file path traversal patterns (../, /etc/, etc.) in prompt parameters

SIEM Query:

source="application_logs" AND (message="*ImagePromptTemplate*" OR message="*ChatPromptTemplate*") AND (message="*../*" OR message="*/etc/*" OR message="*/root/*")
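For ad-hoc log review without a SIEM, the same logic can be sketched in Python (log format and field contents are assumptions; adapt the patterns to your logging setup):

```python
# Sketch: flag log lines that mention the affected template classes
# together with a suspicious path pattern, mirroring the SIEM query.
import re

TEMPLATE = re.compile(r"ImagePromptTemplate|ChatPromptTemplate")
SUSPICIOUS = re.compile(r"\.\./|/etc/|/root/")


def suspicious_lines(log_lines):
    return [
        line
        for line in log_lines
        if TEMPLATE.search(line) and SUSPICIOUS.search(line)
    ]
```

Run it over application logs (e.g. `suspicious_lines(open("app.log"))`) and review any hits manually.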
