CVE-2023-29374
📋 TL;DR
This vulnerability in LangChain's LLMMathChain allows attackers to inject malicious prompts that execute arbitrary Python code via the exec() method. This affects any application using LangChain versions through 0.0.131 that processes untrusted user input through LLMMathChain. The vulnerability enables remote code execution with the privileges of the LangChain process.
💻 Affected Systems
- LangChain
📦 What is this software?
LangChain is an open-source Python framework for building applications powered by large language models (LLMs). LLMMathChain is one of its built-in chains, used to answer math questions by generating and executing Python code.
⚠️ Risk & Real-World Impact
Worst Case
Full system compromise allowing attackers to execute arbitrary commands, access sensitive data, install malware, or pivot to other systems.
Likely Case
Data exfiltration, system manipulation, or denial of service through arbitrary code execution in the application context.
If Mitigated
Limited impact if input validation and sandboxing prevent code execution, though potential for denial of service remains.
🎯 Exploit Status
Exploitation requires crafting a malicious prompt that reaches LLMMathChain without input validation. Public proof-of-concept examples demonstrate the attack vector.
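The core danger can be sketched in a few lines. This is a hypothetical stand-in for the vulnerable code path, not LangChain's actual implementation: any pattern in which model-produced "math" text reaches Python's exec() lets a prompt-injected payload run as arbitrary code.

```python
# Illustrative only: why passing untrusted text to exec() is dangerous.
# Hypothetical stand-in for the vulnerable LLMMathChain code path.
def run_math_expression(expression: str):
    local_vars = {}
    # The vulnerable pattern: the string is executed as arbitrary Python,
    # so a prompt-injected payload runs with the process's privileges.
    exec(f"result = {expression}", {}, local_vars)
    return local_vars["result"]

print(run_math_expression("2 + 2"))  # benign input: 4
# A prompt-injected payload could instead look like:
# run_math_expression("__import__('os').system('id')")
```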
🛠️ Fix & Mitigation
✅ Official Fix
Patch Version: 0.0.132 and later
Vendor Advisory: https://github.com/hwchase17/langchain/pull/1119
Restart Required: No
Instructions:
1. Update LangChain to version 0.0.132 or later using pip: pip install --upgrade "langchain>=0.0.132" (quote the requirement so the shell does not interpret ">" as redirection)
2. Verify the update completed successfully
3. Test LLMMathChain functionality with safe inputs
🔧 Temporary Workarounds
Disable LLMMathChain
Temporarily disable or remove LLMMathChain usage until patching is possible.
# Remove or comment out LLMMathChain imports and usage in your code
Input Validation
Implement strict input validation to reject any prompts containing Python code or special characters.
# Example regex check: import re
# Reject input matching dangerous patterns: re.search(r'exec\(|eval\(|__\w+__', user_input)
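The check above can be sketched as a small rejection filter. The pattern list is illustrative, not exhaustive: denylists like this are easy to bypass, so treat it strictly as a stopgap until you can patch.

```python
import re

# Hedged stopgap sketch of the denylist filter described above.
# Patterns are illustrative and NOT a complete defense.
DANGEROUS = re.compile(r"exec\s*\(|eval\s*\(|__\w+__|\bimport\b")

def is_suspicious(user_input: str) -> bool:
    """Reject (rather than silently strip) inputs that look like code."""
    return bool(DANGEROUS.search(user_input))
```

Rejecting the whole input is safer than stripping matched substrings, which can leave a still-executable payload behind.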
🧯 If You Can't Patch
- Implement strict input validation and sanitization for all user prompts
- Run LangChain in a sandboxed environment with minimal privileges
🔍 How to Verify
Check if Vulnerable:
Check the LangChain version: python -c "import langchain; print(langchain.__version__)". If the version is 0.0.131 or earlier and the application uses LLMMathChain, it is vulnerable.
Check Version:
python -c "import langchain; print(langchain.__version__)"
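The version comparison can be automated with a small helper. This is a sketch that assumes plain numeric version strings (e.g. "0.0.131"); pre-release suffixes would need extra parsing.

```python
def is_vulnerable(version_str: str) -> bool:
    """Return True if a LangChain version string is <= 0.0.131 (affected).

    Assumes a plain numeric "major.minor.patch" string, as printed by:
        python -c "import langchain; print(langchain.__version__)"
    """
    parts = tuple(int(p) for p in version_str.split(".")[:3])
    return parts <= (0, 0, 131)
```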
Verify Fix Applied:
After updating, confirm the version is >= 0.0.132 and test with a safe prompt. Attempts to inject code should fail safely.
📡 Detection & Monitoring
Log Indicators:
- Unusual Python execution errors
- Suspicious import statements in prompts
- Unexpected process spawns from LangChain
Network Indicators:
- Outbound connections from LangChain to unexpected destinations
- Data exfiltration patterns
SIEM Query:
source="langchain" AND ("exec(" OR "eval(" OR "import " OR "__import__")
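If logs are not in a SIEM, the same indicator patterns can be scanned directly. The log format here is an assumption; adapt the patterns and parsing to your own logging setup.

```python
import re

# Hedged sketch: scan application log lines for the prompt-injection
# indicators listed above. Log format and patterns are assumptions.
INDICATORS = re.compile(r"exec\(|eval\(|__import__|\bimport\s")

def suspicious_lines(log_lines):
    """Return log lines matching any code-injection indicator."""
    return [line for line in log_lines if INDICATORS.search(line)]

logs = [
    "user prompt: what is 37593 * 67?",
    "user prompt: __import__('os').system('cat /etc/passwd')",
]
print(suspicious_lines(logs))  # flags only the second line
```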
🔗 References
- https://github.com/hwchase17/langchain/issues/1026
- https://github.com/hwchase17/langchain/issues/814
- https://github.com/hwchase17/langchain/pull/1119
- https://twitter.com/rharang/status/1641899743608463365/photo/1