CVE-2024-21513
📋 TL;DR
This vulnerability allows arbitrary Python code execution in langchain-experimental when using VectorSQLDatabaseChain. Attackers who can control input prompts can execute malicious code on affected systems. Users of langchain-experimental versions 0.0.15 through 0.0.20 with VectorSQLDatabaseChain enabled are vulnerable.
💻 Affected Systems
- langchain-experimental
📦 What is this software?
langchain-experimental is the Python package that hosts experimental LangChain components. It includes VectorSQLDatabaseChain, a chain that translates natural-language prompts into SQL queries against vector-capable databases and parses the results — the code path affected by this CVE.
⚠️ Risk & Real-World Impact
Worst Case
Full system compromise allowing attackers to execute arbitrary Python code, exfiltrate data, install malware, pivot to other systems, and maintain persistence.
Likely Case
Data exfiltration from the database, privilege escalation within the application environment, and potential lateral movement within the network.
If Mitigated
Limited to database query results only, with no code execution capability if proper input validation and sandboxing are implemented.
🎯 Exploit Status
Exploitation requires that an attacker control the input prompt and that VectorSQLDatabaseChain be enabled. The underlying eval() flaw is well understood and easily weaponized.
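To illustrate why an eval()-based parser is so easily weaponized, here is a simplified stand-in for the vulnerable pattern — model-controlled text passed straight to eval(). This is a sketch, not the actual langchain-experimental code; the function name is hypothetical.

```python
side_effects = []

def naive_parse(model_output: str):
    # DANGEROUS: eval() executes any Python expression in the string,
    # so whoever controls the text controls the process.
    return eval(model_output)

# A benign-looking query result parses as expected...
rows = naive_parse("[('Alice', 42)]")

# ...but a crafted "result" runs attacker-chosen code instead of returning data:
naive_parse("side_effects.append('attacker code ran')")
```

Any string that reaches the eval() call — including text influenced by a prompt — becomes executable code, which is why prompt control alone is sufficient for exploitation.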
🛠️ Fix & Mitigation
✅ Official Fix
Patch Version: 0.0.21 and later
Vendor Advisory: https://github.com/langchain-ai/langchain/commit/7b13292e3544b2f5f2bfb8a27a062ea2b0c34561
Restart Required: Yes
Instructions:
1. Update langchain-experimental to version 0.0.21 or later: pip install --upgrade langchain-experimental
2. Restart all services that use the library.
3. Verify the fix by checking the installed version and testing VectorSQLDatabaseChain with safe prompts.
🔧 Temporary Workarounds
Disable VectorSQLDatabaseChain
Remove or disable the VectorSQLDatabaseChain configuration if it is not required for functionality.
# Remove VectorSQLDatabaseChain from your langchain configuration
# Comment out or remove VectorSQLDatabaseChain initialization
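One way to enforce this workaround is to gate chain construction behind an explicit opt-in flag, so the risky code path stays off by default. The flag name and helper below are our own sketch, not part of the langchain API:

```python
import os

# Hypothetical opt-in flag (ENABLE_VECTOR_SQL is an assumed name, not a
# langchain setting): the chain is only built when explicitly enabled.
ENABLE_VECTOR_SQL = os.getenv("ENABLE_VECTOR_SQL", "false").lower() == "true"

def build_vector_sql_chain(factory):
    """Call `factory` (e.g. a VectorSQLDatabaseChain constructor) only if enabled."""
    if not ENABLE_VECTOR_SQL:
        raise RuntimeError(
            "VectorSQLDatabaseChain is disabled (CVE-2024-21513 mitigation)"
        )
    return factory()
```

With the environment variable unset, any attempt to build the chain fails loudly, which is easier to audit than silently commented-out code.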
Input Validation and Sanitization
Implement strict input validation and sanitization for all prompts before processing.
# Implement input validation in your application code
# Example: validate prompt contains only allowed characters
# Use libraries like bleach or custom regex patterns
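A minimal allowlist validator along those lines might look like the following sketch. The character class and length limit are assumptions — tune them to your application; the point is to reject characters (parentheses, underscores, quotes around code) that have no business in a natural-language question but are needed to smuggle Python into an eval() call.

```python
import re

# Assumed allowlist: letters, digits, whitespace, and basic punctuation only.
_ALLOWED = re.compile(r"^[A-Za-z0-9\s.,?!'-]+$")

def validate_prompt(prompt: str, max_len: int = 500) -> str:
    """Return the prompt unchanged if it passes the allowlist, else raise."""
    if not prompt or len(prompt) > max_len:
        raise ValueError("prompt empty or too long")
    if not _ALLOWED.fullmatch(prompt):
        raise ValueError("prompt contains disallowed characters")
    return prompt
```

This is defense in depth, not a substitute for patching: it narrows the attack surface but cannot guarantee that no malicious payload survives.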
🧯 If You Can't Patch
- Implement strict network segmentation to isolate systems using vulnerable versions
- Deploy application-level firewalls with WAF rules to detect and block malicious prompt patterns
🔍 How to Verify
Check if Vulnerable:
Check if langchain-experimental version is between 0.0.15 and 0.0.20 inclusive, and VectorSQLDatabaseChain is configured in your application.
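The version-range check above can be automated with a short script. The helper names are our own, and the parser assumes plain numeric versions (no rc/dev suffixes):

```python
from importlib.metadata import PackageNotFoundError, version

def _parse(v: str) -> tuple:
    # Assumes plain numeric versions like "0.0.17"
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(v: str) -> bool:
    # Affected range per this advisory: 0.0.15 through 0.0.20 inclusive
    return _parse("0.0.15") <= _parse(v) <= _parse("0.0.20")

def installed_is_vulnerable() -> bool:
    try:
        return is_vulnerable(version("langchain-experimental"))
    except PackageNotFoundError:
        return False  # not installed, so not exposed
```

Remember that a patched version alone is not enough to conclude exposure: VectorSQLDatabaseChain must also actually be configured in your application.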
Check Version:
pip show langchain-experimental | grep Version
Verify Fix Applied:
Verify langchain-experimental version is 0.0.21 or higher and test VectorSQLDatabaseChain functionality with safe test prompts.
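The "0.0.21 or higher" check can be expressed as a one-liner (again assuming plain numeric version strings):

```python
def fix_applied(v: str) -> bool:
    # Patched in 0.0.21 and later; tuple comparison handles later minors too.
    return tuple(int(p) for p in v.split(".")) >= (0, 0, 21)
```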
📡 Detection & Monitoring
Log Indicators:
- Unusual Python eval() errors in application logs
- Suspicious database queries with Python code patterns
- Unexpected process spawns from langchain processes
Network Indicators:
- Outbound connections from langchain processes to unexpected destinations
- Unusual data exfiltration patterns from application servers
SIEM Query:
source="application.logs" AND "langchain_experimental" AND ("eval" OR "exec" OR "__import__")
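If you do not have a SIEM, the same query can be run as a local log filter. This is a sketch mirroring the query above — adjust the substring and pattern to your actual log format:

```python
import re

# Flag log lines that mention langchain_experimental together with
# eval/exec/__import__, mirroring the SIEM query.
SUSPICIOUS = re.compile(r"\beval\b|\bexec\b|__import__")

def suspicious_lines(log_lines):
    return [
        line for line in log_lines
        if "langchain_experimental" in line and SUSPICIOUS.search(line)
    ]
```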
🔗 References
- https://github.com/langchain-ai/langchain/blob/672907bbbb7c38bf19787b78e4ffd7c8a9026fe4/libs/experimental/langchain_experimental/sql/vector_sql.py#L81
- https://github.com/langchain-ai/langchain/commit/7b13292e3544b2f5f2bfb8a27a062ea2b0c34561
- https://security.snyk.io/vuln/SNYK-PYTHON-LANGCHAINEXPERIMENTAL-7278171