CVE-2023-29374

9.8 CRITICAL

📋 TL;DR

This vulnerability in LangChain's LLMMathChain allows attackers to inject malicious prompts that execute arbitrary Python code via the exec() method. This affects any application using LangChain versions through 0.0.131 that processes untrusted user input through LLMMathChain. The vulnerability enables remote code execution with the privileges of the LangChain process.

💻 Affected Systems

Products:
  • LangChain
Versions: 0.0.131 and earlier
Operating Systems: All
Default Config Vulnerable: ⚠️ Yes
Notes: Only affects systems using LLMMathChain with untrusted user input. The vulnerability is in the chain's prompt handling.

📦 What is this software?

⚠️ Risk & Real-World Impact

🔴

Worst Case

Full system compromise allowing attackers to execute arbitrary commands, access sensitive data, install malware, or pivot to other systems.

🟠

Likely Case

Data exfiltration, system manipulation, or denial of service through arbitrary code execution in the application context.

🟢

If Mitigated

Limited impact if input validation and sandboxing prevent code execution, though potential for denial of service remains.

🌐 Internet-Facing: HIGH
🏢 Internal Only: MEDIUM

🎯 Exploit Status

Public PoC: ⚠️ Yes
Weaponized: LIKELY
Unauthenticated Exploit: ⚠️ Yes
Complexity: LOW

Exploitation requires crafting malicious prompts that bypass input validation. Public examples demonstrate the attack vector.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: 0.0.132 and later

Vendor Advisory: https://github.com/hwchase17/langchain/pull/1119

Restart Required: No

Instructions:

1. Update LangChain to version 0.0.132 or later using pip: pip install --upgrade langchain>=0.0.132
2. Verify the update completed successfully
3. Test LLMMathChain functionality with safe inputs

🔧 Temporary Workarounds

Disable LLMMathChain

all

Temporarily disable or remove LLMMathChain usage until patching is possible

# Remove or comment out LLMMathChain imports and usage in your code

Input Validation

all

Implement strict input validation to reject any prompts containing Python code or special characters

# Implement regex filtering: import re
# Filter out dangerous patterns: re.sub(r'exec\(|eval\(|__.*__', '', user_input)

🧯 If You Can't Patch

  • Implement strict input validation and sanitization for all user prompts
  • Run LangChain in a sandboxed environment with minimal privileges

🔍 How to Verify

Check if Vulnerable:

Check LangChain version: python -c "import langchain; print(langchain.__version__)" - if version <= 0.0.131 and using LLMMathChain, you are vulnerable.

Check Version:

python -c "import langchain; print(langchain.__version__)"

Verify Fix Applied:

After updating, test with a safe prompt and verify version >= 0.0.132. Attempt to inject code should fail safely.

📡 Detection & Monitoring

Log Indicators:

  • Unusual Python execution errors
  • Suspicious import statements in prompts
  • Unexpected process spawns from LangChain

Network Indicators:

  • Outbound connections from LangChain to unexpected destinations
  • Data exfiltration patterns

SIEM Query:

source="langchain" AND ("exec(" OR "eval(" OR "import " OR "__import__")

🔗 References

📤 Share & Export