CVE-2024-37058
📋 TL;DR
This vulnerability in MLflow allows remote code execution when a user loads and interacts with a maliciously crafted Langchain AgentExecutor model. Attackers exploit unsafe deserialization during model loading to run arbitrary code on the host. Organizations running MLflow 2.5.0 through 2.12.x are at risk; the issue is fixed in 2.13.0.
💻 Affected Systems
- MLflow
📦 What is this software?
MLflow, maintained under LF Projects (the Linux Foundation), is an open-source platform for managing the machine learning lifecycle: experiment tracking, model packaging, a model registry, and deployment.
⚠️ Risk & Real-World Impact
Worst Case
Complete system compromise allowing attackers to execute arbitrary commands, steal data, deploy ransomware, or pivot to other systems in the network.
Likely Case
Data exfiltration, credential theft, and installation of backdoors or cryptocurrency miners on vulnerable MLflow servers.
If Mitigated
Limited impact through network segmentation and strict access controls, potentially containing exploitation to isolated environments.
🎯 Exploit Status
Exploitation requires uploading a malicious model and convincing a user to load it. Once the model is loaded, the deserialization flaw makes code execution straightforward.
🛠️ Fix & Mitigation
✅ Official Fix
Patch Version: 2.13.0
Vendor Advisory: https://hiddenlayer.com/sai-security-advisory/mlflow-june2024
Restart Required: Yes
Instructions:
1. Back up your MLflow data and configurations.
2. Update MLflow to 2.13.0 or newer: `pip install --upgrade 'mlflow>=2.13.0'` (quote the requirement so the shell does not treat `>=` as a redirect).
3. Restart all MLflow services.
4. Verify the update was successful (see "How to Verify" below).
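A minimal sketch for step 4: compare the installed version against the 2.13.0 patch threshold. The helper names are illustrative, and pre-release suffixes are simply ignored.

```python
# Sketch: check an MLflow version string against the 2.13.0 patch threshold.
import re

FIXED = (2, 13, 0)

def parse_version(v: str) -> tuple:
    # Keep only the leading numeric components, e.g. "2.13.0rc1" -> (2, 13, 0)
    parts = re.findall(r"\d+", v)[:3]
    return tuple(int(p) for p in parts)

def is_patched(installed: str) -> bool:
    return parse_version(installed) >= FIXED
```

Feed it the output of `python -c "import mlflow; print(mlflow.__version__)"` from the verification section below.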
🔧 Temporary Workarounds
Disable Langchain AgentExecutor model uploads
Temporarily disable the ability to upload Langchain AgentExecutor models until patching is complete.
Configure MLflow to reject Langchain AgentExecutor model uploads via model registry restrictions
Restrict model upload permissions
Limit model upload capabilities to trusted administrators only.
Configure MLflow authentication to restrict model registry write access
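Registry write access can be narrowed with MLflow's built-in basic-auth app, started via `mlflow server --app-name basic-auth`. A sketch of its `basic_auth.ini`, with placeholder credentials (set a real admin password and grant write permissions per user):

```ini
[mlflow]
# Default permission for non-admin users: READ blocks model uploads.
default_permission = READ
database_uri = sqlite:///basic_auth.db
admin_username = admin
admin_password = change-me
authorization_function = mlflow.server.auth:authenticate_request_basic_auth
```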
🧯 If You Can't Patch
- Network segment MLflow instances from critical systems and restrict outbound internet access
- Implement strict monitoring of model uploads and user interactions with suspicious models
🔍 How to Verify
Check if Vulnerable:
Check the installed MLflow version; any 2.5.0–2.12.x deployment that can load Langchain AgentExecutor models should be treated as vulnerable.
Check Version:
python -c "import mlflow; print(mlflow.__version__)"
Verify Fix Applied:
Confirm MLflow is 2.13.0 or newer on every host that loads or serves models, not just the tracking server.
📡 Detection & Monitoring
Log Indicators:
- Unusual model upload patterns
- Suspicious file deserialization attempts
- Unexpected process execution from MLflow context
Network Indicators:
- Outbound connections from MLflow servers to unknown destinations
- Unusual data exfiltration patterns
SIEM Query:
source="mlflow" AND (event="model_upload" OR event="deserialization_error") | stats count by user, model_type
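The log indicators above can be screened with a minimal scanner. The keyword patterns are assumptions about what a compromise attempt might leave in MLflow or application logs; tune them to your deployment's actual log format.

```python
# Hedged sketch: flag log lines matching the indicators listed above.
# Keywords are assumptions; adapt to your actual log schema.
import re
from typing import Iterable, List

SUSPICIOUS = re.compile(
    r"model_upload|pickle|deserializ|AgentExecutor|exec\(|os\.system",
    re.IGNORECASE,
)

def suspicious_lines(log_lines: Iterable[str]) -> List[str]:
    """Return the subset of log lines that match any indicator pattern."""
    return [line for line in log_lines if SUSPICIOUS.search(line)]
```

Run it over tailed MLflow server logs and forward any hits to your SIEM alongside the query above.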