CVE-2024-37058

CVSS: 8.8 (HIGH)

📋 TL;DR

This vulnerability in MLflow allows remote code execution when a user loads a maliciously uploaded Langchain AgentExecutor model. Attackers can exploit unsafe deserialization of model artifacts to run arbitrary code on affected systems. Organizations running affected MLflow versions (2.5.0 and newer, prior to the 2.13.0 fix) are at risk.

💻 Affected Systems

Products:
  • MLflow
Versions: 2.5.0 and newer, prior to 2.13.0
Operating Systems: All platforms running MLflow
Default Config Vulnerable: ⚠️ Yes
Notes: Only affects deployments using Langchain AgentExecutor models. Requires model upload capability and user interaction with malicious models.

📦 What is this software?

MLflow is an open-source platform for managing the machine-learning lifecycle: experiment tracking, model packaging, a model registry, and deployment. Its model registry accepts models in multiple "flavors", including Langchain models such as AgentExecutor, which is the component affected here.

⚠️ Risk & Real-World Impact

🔴 Worst Case

Complete system compromise allowing attackers to execute arbitrary commands, steal data, deploy ransomware, or pivot to other systems in the network.

🟠 Likely Case

Data exfiltration, credential theft, and installation of backdoors or cryptocurrency miners on vulnerable MLflow servers.

🟢 If Mitigated

Limited impact through network segmentation and strict access controls, potentially containing exploitation to isolated environments.

🌐 Internet-Facing: HIGH
🏢 Internal Only: MEDIUM

🎯 Exploit Status

Public PoC: ❌ No
Weaponized: LIKELY
Unauthenticated Exploit: ❌ No
Complexity: LOW

Exploitation requires uploading a malicious model and convincing a user to interact with it. The deserialization flaw makes code execution straightforward once a victim loads the malicious model.
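
The class of flaw can be sketched with a generic Python pickle example. This is an illustration of why deserializing untrusted data is dangerous in general, not MLflow's actual code path; the class and command below are purely hypothetical:

```python
import pickle
import subprocess

class MaliciousModel:
    """Stand-in for a poisoned serialized artifact (illustration only)."""
    def __reduce__(self):
        # __reduce__ lets an object dictate what pickle calls to "rebuild" it.
        # An attacker can point it at any importable callable with any args.
        return (subprocess.check_output, (["echo", "attacker code ran"],))

payload = pickle.dumps(MaliciousModel())

# The victim only has to *load* the artifact for the callable to execute.
result = pickle.loads(payload)
print(result)  # b'attacker code ran\n'
```

This is why the workarounds below focus on keeping untrusted artifacts out of the registry: once a vulnerable loader touches the payload, no further attacker action is needed.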

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: 2.13.0

Vendor Advisory: https://hiddenlayer.com/sai-security-advisory/mlflow-june2024

Restart Required: Yes

Instructions:

1. Back up your MLflow data and configuration.
2. Update MLflow with pip: pip install --upgrade "mlflow>=2.13.0" (the quotes stop the shell from treating >= as a redirection).
3. Restart all MLflow services.
4. Verify the update: python -c "import mlflow; print(mlflow.__version__)" should report 2.13.0 or newer.

🔧 Temporary Workarounds

1. Disable Langchain AgentExecutor model uploads (applies to: all platforms)

   Temporarily disable the ability to upload Langchain AgentExecutor models until patching is complete, e.g. by configuring the model registry to reject this model flavor.

2. Restrict model upload permissions (applies to: all platforms)

   Limit model upload capability to trusted administrators by configuring MLflow authentication to restrict model registry write access.
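
One concrete way to restrict registry write access is MLflow's built-in basic-auth app (shipped since MLflow 2.5.0). Below is a sketch of a basic_auth.ini that defaults every user to read-only access; treat the exact keys and values as assumptions to verify against the MLflow authentication documentation:

```ini
[mlflow]
# Non-admin users get read-only access by default; write access to the
# model registry must then be granted explicitly per user.
default_permission = READ
database_uri = sqlite:///basic_auth.db
admin_username = admin
admin_password = change-me-immediately
```

The server would then be started with mlflow server --app-name basic-auth, pointing MLFLOW_AUTH_CONFIG_PATH at this file.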

🧯 If You Can't Patch

  • Network segment MLflow instances from critical systems and restrict outbound internet access
  • Implement strict monitoring of model uploads and user interactions with suspicious models

🔍 How to Verify

Check if Vulnerable:

Check MLflow version and verify if Langchain AgentExecutor model functionality is enabled.

Check Version:

python -c "import mlflow; print(mlflow.__version__)"

Verify Fix Applied:

Confirm MLflow reports version 2.13.0 or newer on every instance, and confirm that all services were restarted after the upgrade so the patched code is actually running.
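
The version comparison can be automated with a small standard-library script. The 2.13.0 threshold is taken from this advisory; the helper name is our own:

```python
import importlib.metadata

FIXED = (2, 13, 0)  # first patched release, per this advisory

def is_patched(version: str) -> bool:
    # Compare numeric dotted components; assumes MLflow's plain X.Y.Z scheme
    # (pre-release suffixes like "rc0" would need extra handling).
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= FIXED

# Affected range per this advisory: 2.5.0 <= version < 2.13.0
print(is_patched("2.12.2"))  # False -> still vulnerable
print(is_patched("2.13.0"))  # True  -> patched

try:
    installed = importlib.metadata.version("mlflow")
    print(installed, "patched" if is_patched(installed) else "VULNERABLE")
except importlib.metadata.PackageNotFoundError:
    pass  # MLflow not installed in this environment
```

Run this on each MLflow host rather than once centrally, since servers may lag behind the version pinned in your deployment manifests.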

📡 Detection & Monitoring

Log Indicators:

  • Unusual model upload patterns
  • Suspicious file deserialization attempts
  • Unexpected process execution from MLflow context

Network Indicators:

  • Outbound connections from MLflow servers to unknown destinations
  • Unusual data exfiltration patterns

SIEM Query:

source="mlflow" AND (event="model_upload" OR event="deserialization_error") | stats count by user, model_type

🔗 References

  • Vendor advisory: https://hiddenlayer.com/sai-security-advisory/mlflow-june2024
