CVE-2024-3271

9.8 CRITICAL

📋 TL;DR

A command injection vulnerability in the run-llama/llama_index repository allows attackers to bypass the underscore-based filter in the safe_eval function and execute arbitrary code on the host, resulting in remote code execution. Any deployment that passes untrusted input — including LLM-generated code — through the vulnerable safe_eval function is at risk.

💻 Affected Systems

Products:
  • run-llama/llama_index
Versions: all versions prior to commit 5fbcb5a8b9f20f81b791c7fc8849e352613ab475
Operating Systems: All platforms running Python
Default Config Vulnerable: ⚠️ Yes
Notes: Vulnerability exists in the safe_eval function when processing LLM-generated code

📦 What is this software?

LlamaIndex (run-llama/llama_index) is an open-source Python data framework for building LLM applications. It connects language models to external data sources and includes query engines, such as the Pandas query engine, that evaluate model-generated code — the code path where safe_eval is used.

⚠️ Risk & Real-World Impact

🔴 Worst Case: Complete server compromise allowing data theft, lateral movement, and persistent backdoor installation

🟠 Likely Case: Unauthorized code execution leading to data exfiltration, service disruption, or cryptocurrency mining

🟢 If Mitigated: Limited impact with proper network segmentation and least privilege controls

🌐 Internet-Facing: HIGH - Directly exploitable via web applications using the vulnerable function
🏢 Internal Only: MEDIUM - Requires access to internal systems but can lead to lateral movement

🎯 Exploit Status

Public PoC: ⚠️ Yes
Weaponized: LIKELY
Unauthenticated Exploit: ⚠️ Yes
Complexity: LOW

Exploit bypasses underscore check by using alternative command injection techniques
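Why an underscore check of this kind fails can be sketched with a hypothetical filter in the same spirit (naive_safe_eval below is illustrative, not llama_index's actual code): blocked characters can be constructed at runtime, so a static string scan never sees them.

```python
def naive_safe_eval(code: str):
    # Hypothetical blocklist check, similar in spirit to the vulnerable
    # pattern: reject code containing a literal underscore, then evaluate.
    if "_" in code:
        raise ValueError("unsafe input")
    return eval(code)

# The submitted string contains no underscore, but chr(95) builds one at
# runtime, reaching dunder attributes despite the filter:
payload = (
    "getattr(getattr(1, chr(95)*2 + 'class' + chr(95)*2),"
    " chr(95)*2 + 'name' + chr(95)*2)"
)
print(naive_safe_eval(payload))  # prints "int" — dunder access got through
```

This is the general class of bypass: any filter that inspects the source text, rather than restricting what the evaluated code can do, can be sidestepped by constructing forbidden names dynamically.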

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: Commit 5fbcb5a8b9f20f81b791c7fc8849e352613ab475 and later

Vendor Advisory: https://github.com/run-llama/llama_index/commit/5fbcb5a8b9f20f81b791c7fc8849e352613ab475

Restart Required: No

Instructions:

1. Update llama_index to the latest version.
2. Verify the installed code includes commit 5fbcb5a8b9f20f81b791c7fc8849e352613ab475.
3. Test the safe_eval function with known-malicious inputs to confirm they are rejected.

🔧 Temporary Workarounds

Disable safe_eval function (all platforms)

Temporarily disable or replace the vulnerable safe_eval function with a secure alternative.

# Replace calls to safe_eval with a hardened evaluator, or disable the code path entirely

Input sanitization wrapper (all platforms)

Add additional input validation before passing user- or LLM-supplied code to safe_eval.

# Allowlist-validate expressions before evaluation; regex blocklists alone are easily bypassed
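For the sanitization wrapper, an AST allowlist is considerably stronger than character filtering. A minimal sketch, assuming callers only need arithmetic expressions (restricted_eval is a hypothetical replacement, not part of llama_index):

```python
import ast

# Only pure-arithmetic syntax is permitted; names, attribute access, and
# calls are rejected outright, so dunder tricks never reach eval().
ALLOWED_NODES = {
    ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
    ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub, ast.UAdd,
}


def restricted_eval(code: str):
    """Evaluate an arithmetic expression after allowlist validation."""
    tree = ast.parse(code, mode="eval")
    for node in ast.walk(tree):
        if type(node) not in ALLOWED_NODES:
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
    return eval(compile(tree, "<restricted>", "eval"))


print(restricted_eval("2 * (3 + 4)"))  # prints 14
```

Because validation happens on the parsed tree rather than the source text, runtime tricks like chr(95) are rejected at the Call node before anything executes.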

🧯 If You Can't Patch

  • Network segmentation to isolate vulnerable systems from critical assets
  • Implement strict input validation and output encoding for all user-supplied data

🔍 How to Verify

Check if Vulnerable:

Check whether your application calls the safe_eval function from llama_index, and confirm whether the installed version predates the fix commit.

Check Version:

git log --format=%H | grep 5fbcb5a8b9f20f81b791c7fc8849e352613ab475

Verify Fix Applied:

Test safe_eval with crafted inputs containing command injection attempts without underscores

📡 Detection & Monitoring

Log Indicators:

  • Unusual process spawns from Python applications
  • Unexpected system command execution patterns
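These log indicators can be approximated with a simple scan of application logs (the patterns below are illustrative assumptions, not vetted detection signatures):

```python
import re

# Illustrative indicator patterns: shell-spawn primitives and the
# runtime-built-underscore trick seen in filter-bypass payloads.
INDICATORS = [
    re.compile(r"\b(os\.system|subprocess|popen)\b", re.IGNORECASE),
    re.compile(r"chr\(\s*95\s*\)"),  # chr(95) == "_" built at runtime
    re.compile(r"getattr\s*\("),
]


def suspicious_lines(log_text: str) -> list[str]:
    """Return log lines matching any indicator pattern."""
    return [
        line for line in log_text.splitlines()
        if any(p.search(line) for p in INDICATORS)
    ]
```

Matches warrant manual review rather than automated blocking, since getattr and subprocess also appear in legitimate application code.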

Network Indicators:

  • Outbound connections from application servers to unusual destinations
  • Command and control traffic patterns

SIEM Query:

source="application.logs" AND ("safe_eval" OR "llama_index") AND (process_execution OR command_injection)

