CVE-2023-36281

9.8 CRITICAL

📋 TL;DR

This vulnerability in LangChain versions before 0.0.312 allows remote attackers to execute arbitrary code by loading a malicious JSON file containing specially crafted templates. Any application using LangChain's load_prompt functionality with untrusted JSON input is affected.

💻 Affected Systems

Products:
  • LangChain
Versions: before 0.0.312
Operating Systems: All
Default Config Vulnerable: ⚠️ Yes
Notes: Applications using load_prompt with JSON files from untrusted sources are vulnerable. The issue involves __subclasses__ method exploitation in templates.
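The `__subclasses__` technique the note refers to is generic Python introspection: from any value, an attacker-controlled template expression can walk up to `object` and enumerate every class loaded in the interpreter, including classes that reach file and process APIs. A minimal illustration in plain Python (no LangChain involved):

```python
# From any object, Python introspection reaches the base class `object`
# and can enumerate every class currently loaded in the interpreter.
# A template engine that exposes attribute access (e.g. Jinja2-style
# expressions) gives a template author this same reach.
subclasses = ().__class__.__base__.__subclasses__()  # tuple -> object -> all subclasses
names = {cls.__name__ for cls in subclasses}
```

This is why a template string inside a JSON file can matter: if the template engine evaluates attribute access, the JSON effectively carries executable logic.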

📦 What is this software?

LangChain is an open-source Python framework for building applications powered by large language models, providing components for prompts, chains, agents, and retrieval. Its load_prompt utility deserializes prompt templates from files, including JSON, which is the code path affected by this CVE.

⚠️ Risk & Real-World Impact

🔴

Worst Case

Complete system compromise with remote code execution leading to data theft, ransomware deployment, or lateral movement within the network.

🟠

Likely Case

Remote code execution on the vulnerable server, potentially allowing attackers to steal sensitive data, install backdoors, or pivot to other systems.

🟢

If Mitigated

Limited impact if proper input validation and sandboxing are implemented, potentially reduced to denial of service.

🌐 Internet-Facing: HIGH
🏢 Internal Only: MEDIUM

🎯 Exploit Status

Public PoC: ⚠️ Yes
Weaponized: LIKELY
Unauthenticated Exploit: ⚠️ Yes
Complexity: LOW

Exploitation requires loading a malicious JSON file via load_prompt. Public proof-of-concept exists in GitHub issues.
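For context, the public PoCs target the Jinja2 template path in prompt deserialization. A hedged sketch of the kind of JSON a malicious prompt file could contain (field names follow LangChain's prompt serialization format; the exact payload varies and this one only enumerates classes rather than executing commands):

```json
{
  "_type": "prompt",
  "input_variables": [],
  "template_format": "jinja2",
  "template": "{{ ''.__class__.__mro__[-1].__subclasses__() }}"
}
```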

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: 0.0.312 and later

Vendor Advisory: https://github.com/langchain-ai/langchain/releases/tag/v0.0.312

Restart Required: No

Instructions:

1. Update LangChain to version 0.0.312 or later: pip install --upgrade langchain==0.0.312
2. Verify the update with pip show langchain.
3. Test that load_prompt still works with legitimate JSON files.
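As a belt-and-braces check, application code can verify the installed version at startup. A sketch using only the standard library (the helper name is illustrative, and the naive comparison assumes purely numeric version components):

```python
from importlib import metadata

def is_patched(version: str, minimum: str = "0.0.312") -> bool:
    """Naive dotted-version comparison; assumes purely numeric components."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

# Look up the installed LangChain version, if the package is present.
try:
    installed = metadata.version("langchain")
    print("patched:", is_patched(installed))
except metadata.PackageNotFoundError:
    print("langchain is not installed")
```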

🔧 Temporary Workarounds

Input Validation and Sanitization

Applies to: all

Implement strict validation of JSON input before passing to load_prompt, rejecting files with suspicious template content.
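A minimal sketch of such a pre-filter, using only the standard library. The field names follow LangChain's prompt JSON format; the token denylist is illustrative, not exhaustive, and this kind of filtering is a stopgap rather than a substitute for upgrading:

```python
import json

# Illustrative denylist: dunder access and introspection names commonly
# seen in server-side-template-injection payloads. Not exhaustive.
SUSPICIOUS_TOKENS = ("__", "subclasses", "mro", "globals", "builtins", "import")

def is_safe_prompt_json(raw: str) -> bool:
    """Return True only if the prompt JSON looks safe to pass to load_prompt."""
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return False
    if not isinstance(data, dict):
        return False
    # Jinja2 templates are the known attack path for this CVE; reject them.
    if data.get("template_format") == "jinja2":
        return False
    template = str(data.get("template", "")).lower()
    return not any(token in template for token in SUSPICIOUS_TOKENS)
```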

Sandbox Execution

Applies to: all

Run LangChain in a containerized or sandboxed environment with limited permissions to restrict potential RCE impact.

🧯 If You Can't Patch

  • Disable load_prompt functionality or restrict it to trusted, pre-verified JSON files only.
  • Implement network segmentation to isolate vulnerable systems and monitor for suspicious outbound connections.
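The first bullet can be enforced in code by resolving every requested prompt path against a pre-verified directory before it reaches load_prompt. A sketch (TRUSTED_DIR and the function name are illustrative, not part of LangChain):

```python
from pathlib import Path

# Illustrative trusted location; adjust to your deployment.
TRUSTED_DIR = Path("/opt/app/prompts").resolve()

def resolve_trusted_prompt(path_str: str) -> Path:
    """Resolve a prompt path, refusing anything outside TRUSTED_DIR."""
    candidate = Path(path_str).resolve()  # collapses ../ traversal
    if TRUSTED_DIR != candidate and TRUSTED_DIR not in candidate.parents:
        raise ValueError(f"prompt path outside trusted directory: {candidate}")
    return candidate
```

Resolving before the containment check matters: it defeats `../` traversal in the supplied path.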

🔍 How to Verify

Check if Vulnerable:

Check the installed LangChain version with pip show langchain or by examining package metadata. If the version is below 0.0.312 and the application passes JSON files from untrusted sources to load_prompt, the system is vulnerable.

Check Version:

pip show langchain | grep Version

Verify Fix Applied:

Confirm the version is 0.0.312 or higher using pip show langchain. Legitimate JSON files should still load correctly; a known-malicious test JSON file, loaded only in a safe, isolated environment, should fail without executing code.

📡 Detection & Monitoring

Log Indicators:

  • Unexpected process execution from LangChain context
  • Errors or warnings related to JSON parsing or template loading in LangChain logs
  • Failed attempts to load malformed JSON files

Network Indicators:

  • Outbound connections from LangChain process to unexpected destinations
  • Command and control traffic patterns from the server

SIEM Query:

process_name:"python" AND (process_command_line:"langchain" OR process_command_line:"load_prompt") AND (event_type:"process_execution" OR event_type:"network_connection")

🔗 References

  • Vendor advisory: https://github.com/langchain-ai/langchain/releases/tag/v0.0.312
  • NVD entry: https://nvd.nist.gov/vuln/detail/CVE-2023-36281
