CVE-2023-34541

9.8 CRITICAL

📋 TL;DR

Langchain 0.0.171 contains a vulnerability in the load_prompt function that allows arbitrary code execution when a malicious prompt file is loaded. Any application that passes untrusted input to Langchain's prompt-loading functionality is affected; users who load prompts from external or user-controlled sources are particularly exposed.

💻 Affected Systems

Products:
  • Langchain
Versions: 0.0.171
Operating Systems: All platforms running Python
Default Config Vulnerable: ⚠️ Yes
Notes: Vulnerability exists in the default configuration when using load_prompt with untrusted input.

📦 What is this software?

Langchain is an open-source Python framework for building applications powered by large language models (LLMs). The affected load_prompt function is its utility for loading prompt templates from files.

⚠️ Risk & Real-World Impact

🔴 Worst Case

Complete system compromise allowing attackers to execute arbitrary code with the privileges of the Langchain process, potentially leading to data theft, ransomware deployment, or lateral movement within the network.

🟠 Likely Case

Remote code execution leading to unauthorized access, data exfiltration, or installation of backdoors on affected systems.

🟢 If Mitigated

Limited impact if prompt loading is restricted to trusted sources and proper input validation is implemented.

🌐 Internet-Facing: HIGH if Langchain applications are exposed to untrusted users who can upload or control prompt files.
🏢 Internal Only: MEDIUM if only internal users can access prompt loading functionality, but risk remains from insider threats or compromised accounts.

🎯 Exploit Status

Public PoC: ⚠️ Yes
Weaponized: LIKELY
Unauthenticated Exploit: ⚠️ Yes
Complexity: LOW

The vulnerability is straightforward to exploit by crafting malicious prompt files that trigger code execution when loaded.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: 0.0.172 and later

Vendor Advisory: https://github.com/hwchase17/langchain/issues/4849

Restart Required: No

Instructions:

1. Update Langchain using pip: pip install --upgrade "langchain>=0.0.172" (quote the requirement so the shell does not interpret > as output redirection)
2. Verify the update completed successfully
3. Test prompt loading functionality to ensure compatibility

🔧 Temporary Workarounds

Restrict Prompt Sources (all platforms)

Only load prompts from trusted, controlled sources and avoid loading prompts from user input or untrusted locations.
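One way to enforce this is a path allowlist check before any call to load_prompt. The sketch below uses only the standard library; TRUSTED_DIR is a hypothetical location you would adjust for your deployment, and Path.is_relative_to requires Python 3.9+.

```python
from pathlib import Path

# Hypothetical trusted location for prompt templates; adjust per deployment.
TRUSTED_DIR = Path("/opt/app/prompts").resolve()

def is_trusted_prompt_path(path: str) -> bool:
    """Return True only if `path` resolves inside TRUSTED_DIR.

    Resolving first defeats ../ traversal tricks in the supplied path.
    """
    resolved = Path(path).resolve()
    return resolved.is_relative_to(TRUSTED_DIR)
```

An application would call is_trusted_prompt_path on every candidate path and refuse to pass rejected paths to load_prompt.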

Input Validation (all platforms)

Implement strict validation of prompt files before passing them to the load_prompt function.
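Because the exploit depends on load_prompt handling code-bearing files, a minimal validation step is to accept only data-format templates. A hedged sketch, assuming JSON/YAML are the only formats your application legitimately uses:

```python
from pathlib import Path

# Assumption: legitimate prompt templates are plain JSON or YAML only.
SAFE_SUFFIXES = {".json", ".yaml", ".yml"}

def is_safe_prompt_file(path: str) -> bool:
    """Accept only data-format prompt files; reject .py and everything else."""
    return Path(path).suffix.lower() in SAFE_SUFFIXES
```

This is a pre-filter, not a complete defense: it should be combined with the source restriction above and, ideally, schema validation of the file contents.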

🧯 If You Can't Patch

  • Isolate Langchain applications in restricted network segments with minimal permissions
  • Implement strict access controls and monitoring for prompt loading operations

🔍 How to Verify

Check if Vulnerable:

Run python -c "import langchain; print(langchain.__version__)"; if the output is 0.0.171, the installation is vulnerable.

Check Version:

python -c "import langchain; print(langchain.__version__)"

Verify Fix Applied:

After updating, verify version is 0.0.172 or higher using the same command.
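If you want to script this check, a naive version comparison against the patched release (0.0.172, per the advisory above) can be sketched as follows. This assumes plain X.Y.Z version strings; production code should prefer packaging.version.Version.

```python
# Naive check that an installed version string meets the first patched
# release. Assumes simple dotted-integer versions (e.g. "0.0.172").
def is_patched(version: str, fixed: str = "0.0.172") -> bool:
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(fixed)
```

In practice you would feed it langchain.__version__ and alert when it returns False.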

📡 Detection & Monitoring

Log Indicators:

  • Unexpected process execution from Langchain applications
  • Abnormal file access patterns in prompt directories
  • Errors or warnings related to prompt loading

Network Indicators:

  • Unexpected outbound connections from Langchain processes
  • Suspicious file downloads to prompt directories

SIEM Query:

Process creation where parent process contains 'python' and command line contains 'langchain' AND (process name is suspicious OR destination IP is external)
