CVE-2023-38896

9.8 CRITICAL

📋 TL;DR

This vulnerability in LangChain allows remote attackers to execute arbitrary code through the from_math_prompt and from_colored_object_prompt functions, which build chains that execute LLM-generated Python. It affects LangChain versions 0.0.194 and earlier. Any application that passes untrusted input to these functions is at risk of complete system compromise.

💻 Affected Systems

Products:
  • LangChain
Versions: v0.0.194 and earlier
Operating Systems: All platforms running Python
Default Config Vulnerable: ⚠️ Yes
Notes: Only affects applications using the vulnerable from_math_prompt or from_colored_object_prompt functions.
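To illustrate the root cause, the vulnerable functions follow the program-aided language model (PAL) pattern: the LLM writes Python to answer a question, and the chain executes it. The sketch below is a toy stand-in, not actual LangChain source; `fake_llm` substitutes for the real model call.

```python
# Toy sketch of the PAL (program-aided LM) pattern behind this CVE.
# NOT real LangChain code: fake_llm stands in for the model call that
# from_math_prompt / from_colored_object_prompt wire up internally.

def fake_llm(question: str) -> str:
    # In a real chain the LLM writes Python to solve the question.
    # A prompt-injected question can make it emit arbitrary code instead,
    # e.g. "import os; os.system('...')".
    return "result = 2 + 2"

def run_pal_chain(question: str) -> object:
    code = fake_llm(question)
    scope: dict = {}
    exec(code, scope)  # the dangerous step: executing model output
    return scope.get("result")

print(run_pal_chain("What is 2 + 2?"))  # prints 4
```

Because the executed code is derived from the prompt, an attacker who controls any part of the input can steer the model into emitting malicious Python.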

📦 What is this software?

LangChain is an open-source Python framework for building applications powered by large language models (LLMs), providing prompt templates, chains, and agents for composing model calls with external tools.

⚠️ Risk & Real-World Impact

🔴 Worst Case

Complete system takeover with remote code execution, data exfiltration, lateral movement, and persistent backdoor installation.

🟠 Likely Case

Remote code execution leading to application compromise, data theft, and potential privilege escalation.

🟢 If Mitigated

Limited impact with proper input validation and sandboxing, potentially preventing code execution.

🌐 Internet-Facing: HIGH - Remote exploitation without authentication makes internet-facing systems primary targets.
🏢 Internal Only: MEDIUM - Internal systems are still vulnerable but require network access.

🎯 Exploit Status

Public PoC: ⚠️ Yes
Weaponized: LIKELY
Unauthenticated Exploit: ⚠️ Yes
Complexity: LOW

Exploitation details are publicly available in GitHub issues and security research.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: v0.0.195 and later

Vendor Advisory: https://github.com/hwchase17/langchain/pull/6003

Restart Required: No

Instructions:

1. Update LangChain: pip install --upgrade "langchain>=0.0.195" (quote the requirement so the shell does not treat >= as a redirect)
2. Verify no vulnerable functions are called with untrusted input
3. Test application functionality after update

🔧 Temporary Workarounds

Disable vulnerable functions (applies to: all platforms)

Remove or disable calls to the from_math_prompt and from_colored_object_prompt functions.

# Review code for these function calls and remove/disable them

Input validation and sanitization (applies to: all platforms)

Implement strict input validation for any prompt inputs.

# Add input validation before passing data to LangChain functions
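As a minimal sketch of the input-validation workaround: the token denylist and length cap below are illustrative assumptions, not a complete defense against prompt injection, and should be tuned to your application.

```python
import re

# Illustrative pre-filter for prompt inputs before they reach LangChain.
# The pattern and length limit are assumptions; a denylist alone cannot
# fully stop prompt injection, but it raises the bar for trivial payloads.
_SUSPICIOUS = re.compile(
    r"(__|import\s|exec\s*\(|eval\s*\(|subprocess|os\.system)"
)

def is_safe_prompt(text: str, max_len: int = 500) -> bool:
    """Reject overlong inputs and inputs containing code-execution tokens."""
    if len(text) > max_len:
        return False
    return _SUSPICIOUS.search(text) is None

print(is_safe_prompt("What is 12 * 7?"))             # True
print(is_safe_prompt("import os; os.system('id')"))  # False
```

Calls that fail this check should be logged and dropped rather than forwarded to the chain.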

🧯 If You Can't Patch

  • Implement network segmentation to isolate LangChain applications
  • Deploy application-level firewalls with strict input validation rules

🔍 How to Verify

Check if Vulnerable:

Check LangChain version: pip show langchain | grep Version
If the version is 0.0.194 or earlier and the application calls either vulnerable function, the system is vulnerable.

Check Version:

pip show langchain | grep Version

Verify Fix Applied:

Verify version >= 0.0.195: pip show langchain | grep Version
Test that vulnerable functions now properly sanitize input.
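The version check above can be scripted. This is a small sketch that assumes a plain numeric version string (pre-release suffixes such as rc1 would need extra parsing); the commented line shows how to read the installed version via the standard library.

```python
def is_vulnerable(version: str) -> bool:
    """True if a LangChain version string is 0.0.194 or earlier.

    Assumes a plain numeric X.Y.Z version string.
    """
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts <= (0, 0, 194)

# To check the installed package (assumes langchain is installed):
# from importlib.metadata import version as pkg_version
# print(is_vulnerable(pkg_version("langchain")))

print(is_vulnerable("0.0.194"))  # True
print(is_vulnerable("0.0.195"))  # False
```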

📡 Detection & Monitoring

Log Indicators:

  • Unusual process execution from LangChain context
  • Error logs showing code execution failures in prompt functions
  • Unexpected network connections from LangChain processes

Network Indicators:

  • Outbound connections from LangChain to unexpected destinations
  • Command and control traffic patterns

SIEM Query:

source="application.logs" AND ("from_math_prompt" OR "from_colored_object_prompt") AND ("exec" OR "system" OR "subprocess")
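The SIEM query above can also be applied offline to exported logs. This minimal stand-in flags lines that mention a vulnerable function alongside a code-execution keyword; the log format is an assumption, so adapt it to your pipeline.

```python
import re

# Offline equivalent of the SIEM query: flag log lines mentioning a
# vulnerable function together with a code-execution keyword.
FUNC = re.compile(r"from_math_prompt|from_colored_object_prompt")
EXEC = re.compile(r"\bexec\b|\bsystem\b|\bsubprocess\b")

def suspicious_lines(log_lines):
    return [ln for ln in log_lines if FUNC.search(ln) and EXEC.search(ln)]

logs = [
    "INFO chain start from_math_prompt",
    "WARN from_math_prompt output passed to exec",
    "INFO unrelated request handled",
]
print(suspicious_lines(logs))  # ['WARN from_math_prompt output passed to exec']
```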
