CVE-2023-32786

7.5 HIGH

📋 TL;DR

This vulnerability in Langchain allows attackers to inject malicious prompts that force the service to retrieve data from arbitrary URLs, enabling server-side request forgery (SSRF) attacks. This could lead to internal network reconnaissance, data exfiltration, or content injection into downstream AI tasks. All systems running vulnerable Langchain versions are affected.

💻 Affected Systems

Products:
  • Langchain
Versions: 0.0.155 and earlier
Operating Systems: All
Default Config Vulnerable: ⚠️ Yes
Notes: Any Langchain deployment using prompt templates without proper input validation is vulnerable.

📦 What is this software?

⚠️ Risk & Real-World Impact

🔴

Worst Case

Complete compromise of internal network resources, sensitive data exfiltration, and injection of malicious content into AI workflows leading to downstream exploitation.

🟠

Likely Case

Unauthorized access to internal services, data leakage from internal endpoints, and potential manipulation of AI-generated outputs.

🟢

If Mitigated

Limited impact with proper input validation and network segmentation, potentially only affecting isolated services.

🌐 Internet-Facing: HIGH
🏢 Internal Only: MEDIUM

🎯 Exploit Status

Public PoC: ⚠️ Yes
Weaponized: LIKELY
Unauthenticated Exploit: ⚠️ Yes
Complexity: LOW

Exploitation requires crafting malicious prompts but doesn't require authentication to the Langchain service.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: 0.0.156 and later

Vendor Advisory: https://github.com/langchain-ai/langchain/security/advisories

Restart Required: Yes

Instructions:

1. Update Langchain to version 0.0.156 or later using pip: pip install --upgrade langchain==0.0.156
2. Restart all Langchain services
3. Verify the update with: pip show langchain

🔧 Temporary Workarounds

Input Validation and Sanitization

all

Implement strict input validation and sanitization for all prompt inputs to prevent URL injection.

Network Segmentation

all

Restrict Langchain service network access to only necessary internal endpoints using firewall rules.

🧯 If You Can't Patch

  • Implement strict input validation to reject prompts containing URL patterns
  • Deploy network controls to restrict Langchain service outbound connections

🔍 How to Verify

Check if Vulnerable:

Check Langchain version: pip show langchain | grep Version

Check Version:

pip show langchain | grep Version

Verify Fix Applied:

Confirm version is 0.0.156 or later and test prompt injection attempts are blocked

📡 Detection & Monitoring

Log Indicators:

  • Unusual outbound HTTP requests from Langchain service
  • Failed prompt validation attempts
  • Requests to internal IP addresses from Langchain

Network Indicators:

  • Langchain service making unexpected HTTP requests
  • Requests to internal network segments from AI service

SIEM Query:

source="langchain" AND (http_request OR url_fetch) AND NOT destination_ip IN [allowed_ips]

🔗 References

📤 Share & Export