CVE-2023-39631

9.8 CRITICAL

📋 TL;DR

This vulnerability allows remote attackers to execute arbitrary code, without authentication, through the evaluate function of the numexpr library as used by LangChain (notably its math-evaluation chain, LLMMathChain). It affects installations running vulnerable LangChain versions and can compromise AI/ML applications built on the framework.

💻 Affected Systems

Products:
  • langchain-ai LangChain
Versions: v0.0.245 and earlier, when the vulnerable numexpr integration is used
Operating Systems: All
Default Config Vulnerable: ⚠️ Yes
Notes: Applications using LangChain's expression evaluation features with numexpr are vulnerable. The vulnerability exists in how LangChain integrates with numexpr's evaluate function.
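To make the risk concrete, the vulnerable pattern looks roughly like the minimal sketch below. Python's built-in eval stands in for numexpr's evaluate, which in affected versions also interprets the raw expression string; the function name answer_math_question is hypothetical, not a LangChain API:

```python
def answer_math_question(user_expression: str) -> str:
    # DANGEROUS: evaluating attacker-controlled text executes it as code.
    # eval() stands in here for numexpr.evaluate(); the flaw is the same
    # shape -- untrusted input flows directly into an expression evaluator.
    return str(eval(user_expression))

print(answer_math_question("2 + 2"))  # "4"
# An attacker-supplied string such as "__import__('os').system('id')"
# would run a shell command instead of doing arithmetic.
```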

📦 What is this software?

LangChain is a popular open-source Python framework for building applications powered by large language models (LLMs). It provides chains, agents, and integrations for tasks such as retrieval, tool use, and mathematical expression evaluation, the last of which relies on the numexpr library in affected versions.

⚠️ Risk & Real-World Impact

🔴

Worst Case

Complete system compromise with attacker gaining full control over the server, allowing data theft, lateral movement, and persistent backdoor installation.

🟠

Likely Case

Remote code execution leading to application compromise, data exfiltration, and potential privilege escalation on the host system.

🟢

If Mitigated

Limited impact if proper input validation and sandboxing are implemented, though code execution may still be possible in some contexts.

🌐 Internet-Facing: HIGH
🏢 Internal Only: MEDIUM

🎯 Exploit Status

Public PoC: ⚠️ Yes
Weaponized: LIKELY
Unauthenticated Exploit: ⚠️ Yes
Complexity: LOW

Exploitation requires only that attacker-controlled input reach the evaluate function. The vulnerability is well documented in public repositories, with proof-of-concept examples available.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: v0.0.246 and later

Vendor Advisory: https://github.com/langchain-ai/langchain/issues/8363

Restart Required: Yes

Instructions:

1. Update LangChain to version 0.0.246 or later using pip: pip install --upgrade "langchain>=0.0.246"
2. Restart all services using LangChain
3. Verify the update was successful

🔧 Temporary Workarounds

Disable numexpr evaluation

Applies to: all platforms

Stop using the vulnerable evaluation feature until you can patch: remove or disable any chain or code path that forwards untrusted input to numexpr's evaluate function. Note: the LANGCHAIN_DISABLE_NUMEXPR environment variable suggested in some write-ups is not a documented LangChain setting; verify it has any effect in your version before relying on it.

Input validation and sanitization

Applies to: all platforms

Implement strict input validation for all user inputs passed to evaluation functions, e.g. an allowlist restricted to arithmetic characters.
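A minimal sketch of such a validator, assuming only plain arithmetic expressions need to pass. The allowlist and function name are illustrative, not part of LangChain:

```python
import re

# Allowlist for simple arithmetic: digits, whitespace, basic operators, parens.
SAFE_EXPR = re.compile(r"[\d\s+\-*/%().]+")

def validate_expression(expr: str) -> str:
    """Reject anything outside a strict arithmetic allowlist before evaluation."""
    if not SAFE_EXPR.fullmatch(expr):
        raise ValueError(f"rejected expression: {expr!r}")
    return expr

validate_expression("2 * (3 + 4)")                    # passes
# validate_expression("__import__('os').system('id')")  # raises ValueError
```

An allowlist is preferable to a denylist here: blocking known-bad substrings like `__import__` is easy to bypass, while permitting only arithmetic characters leaves no room for code injection.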

🧯 If You Can't Patch

  • Implement network segmentation to isolate LangChain services from critical systems
  • Deploy application-level firewalls with strict input validation rules for evaluation endpoints

🔍 How to Verify

Check if Vulnerable:

Check LangChain version and verify if numexpr evaluation is enabled in your application

Check Version:

python -c "import langchain; print(langchain.__version__)"

Verify Fix Applied:

Verify LangChain version is 0.0.246 or later and test that malicious input to evaluate functions is properly rejected
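
The version comparison can be scripted. This sketch assumes simple dotted numeric version strings (pre-release suffixes such as rc1 would need extra handling); the helper name is illustrative:

```python
def is_patched(version: str, fixed=(0, 0, 246)) -> bool:
    """Return True if the dotted version string is at least the fixed release."""
    # Assumes purely numeric components, e.g. "0.0.245" or "0.1.0".
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= fixed

print(is_patched("0.0.245"))  # False -- vulnerable
print(is_patched("0.0.246"))  # True  -- patched
```

Feed it the output of `python -c "import langchain; print(langchain.__version__)"` from the step above.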

📡 Detection & Monitoring

Log Indicators:

  • Unusual evaluation function calls
  • Suspicious input patterns in expression evaluation logs
  • Error messages related to numexpr evaluation failures
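
A rough log-scanning sketch built on the indicators above. The patterns and the log format are assumptions to tune to your own logging pipeline:

```python
import re

# Hypothetical injection indicators; extend to match your threat model.
SUSPICIOUS = re.compile(r"__import__|os\.system|subprocess|\beval\b")

def flag_suspicious(log_lines):
    """Return log lines whose evaluated expression looks like code injection."""
    return [line for line in log_lines if SUSPICIOUS.search(line)]

logs = [
    'evaluate expr="2 + 2"',
    'evaluate expr="__import__(\'os\').system(\'id\')"',
]
print(flag_suspicious(logs))  # only the second line is flagged
```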

Network Indicators:

  • Unusual outbound connections from LangChain services
  • Traffic patterns indicating code execution attempts

SIEM Query:

source="langchain" AND (event="evaluate" OR event="numexpr") AND (input="*__import__*" OR input="*os.system*" OR input="*subprocess*" OR input="*eval*")
