CVE-2025-0315
📋 TL;DR
A memory exhaustion vulnerability in Ollama allows attackers to upload specially crafted GGUF model files that cause unlimited memory allocation, leading to Denial of Service. This affects all Ollama servers running vulnerable versions, potentially disrupting AI model serving capabilities.
💻 Affected Systems
- ollama/ollama
📦 What is this software?
Ollama is an open-source tool for running large language models locally. It serves models packaged in the GGUF format, which is the file type abused by this vulnerability.
⚠️ Risk & Real-World Impact
Worst Case
Complete server crash and service unavailability due to memory exhaustion, requiring manual intervention to restart services and clear memory.
Likely Case
Temporary service degradation or crashes affecting model inference capabilities until memory is freed or the service is restarted.
If Mitigated
Limited impact with proper monitoring and resource limits in place, potentially causing only temporary performance issues.
🎯 Exploit Status
Exploitation requires ability to upload custom GGUF model files to the Ollama server, which typically requires some level of access or API usage.
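Since exploitation hinges on being able to reach the server's API, a useful first check is whether the HTTP API is exposed at all. A minimal sketch (127.0.0.1:11434 is Ollama's default bind address; adjust the host for remote checks):

```shell
#!/bin/sh
# Probe whether an Ollama API is reachable from this machine.
HOST="${1:-127.0.0.1:11434}"

extract_version() {
    # Pull a dotted version number out of the /api/version JSON response.
    grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1
}

resp=$(curl -sf --max-time 3 "http://$HOST/api/version" 2>/dev/null || true)
if [ -n "$resp" ]; then
    echo "Ollama API reachable at $HOST (version: $(printf '%s' "$resp" | extract_version))"
else
    echo "No Ollama API answering at $HOST"
fi
```

If the API answers from untrusted networks, anyone on those networks has the access level this exploit requires.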
🛠️ Fix & Mitigation
✅ Official Fix
Patch Version: > 0.3.14
Vendor Advisory: https://huntr.com/bounties/da414d29-b55a-496f-b135-17e0fcec67bc
Restart Required: No
Instructions:
1. Update Ollama to version 0.3.15 or later.
2. On Linux, re-run the official install script (curl -fsSL https://ollama.com/install.sh | sh), or install the latest release from the official repository via your package manager.
3. Verify the update with 'ollama --version'.
🔧 Temporary Workarounds
Restrict model uploads
Limit who can upload custom GGUF model files to the Ollama server
Implement memory limits
Use container or system-level memory limits to prevent complete exhaustion
docker run --memory=4g ollama/ollama
systemd: MemoryMax=4G in the service file (MemoryLimit= is the deprecated cgroup v1 equivalent)
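For systemd-managed installs, the memory cap above can be expressed as a drop-in unit. A sketch (the unit name 'ollama.service' and the 3G/4G values are example assumptions; size them for your host):

```ini
# /etc/systemd/system/ollama.service.d/memory.conf
[Service]
# Soft limit: triggers memory reclaim pressure before the hard cap is reached
MemoryHigh=3G
# Hard cap: the service is OOM-killed rather than exhausting the host
MemoryMax=4G
```

Apply with 'systemctl daemon-reload' followed by 'systemctl restart ollama'.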
🧯 If You Can't Patch
- Implement strict access controls to prevent unauthorized users from uploading model files
- Deploy monitoring for abnormal memory usage patterns and implement automated alerts
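The memory-monitoring suggestion above can be sketched as a small shell check suitable for a cron job or alerting hook (the 8 GiB threshold and the process name 'ollama' are assumptions; adapt to your deployment):

```shell
#!/bin/sh
# Alert when the ollama process's resident memory exceeds a threshold.
# THRESHOLD_KB is an assumed example value (8 GiB).
THRESHOLD_KB=$((8 * 1024 * 1024))

check_rss() {
    # $1: resident set size in kB, $2: threshold in kB.
    # Returns success (alert) when usage exceeds the threshold.
    [ "$1" -gt "$2" ]
}

# Example wiring: sum RSS across ollama processes via ps.
rss_kb=$(ps -C ollama -o rss= | awk '{s+=$1} END {print s+0}')
if check_rss "${rss_kb:-0}" "$THRESHOLD_KB"; then
    echo "ALERT: ollama RSS ${rss_kb} kB exceeds ${THRESHOLD_KB} kB" >&2
fi
```

Route the alert line into whatever notification channel your team already watches.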
🔍 How to Verify
Check if Vulnerable:
Run 'ollama --version' and check whether the reported version is <= 0.3.14
Check Version:
ollama --version
Verify Fix Applied:
Confirm the version is > 0.3.14 with 'ollama --version', then test model upload functionality
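The version gate can be scripted for fleet-wide checks. A sketch using 'sort -V' for version comparison (the parsing of 'ollama --version' output is an assumption about its format):

```shell
#!/bin/sh
FIXED="0.3.15"   # first release with the fix

is_vulnerable() {
    # $1: installed version, e.g. "0.3.14".
    # Vulnerable when $1 sorts strictly before the fixed version.
    [ "$1" != "$FIXED" ] && \
        [ "$(printf '%s\n' "$1" "$FIXED" | sort -V | head -n1)" = "$1" ]
}

# Example wiring: extract the first dotted version from the CLI output.
ver=$(ollama --version 2>/dev/null | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)
if [ -n "$ver" ] && is_vulnerable "$ver"; then
    echo "VULNERABLE: ollama $ver is <= 0.3.14; update to $FIXED or later" >&2
fi
```

The same function works for any later fixed version by changing FIXED.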
📡 Detection & Monitoring
Log Indicators:
- Abnormal memory allocation patterns
- Multiple large model upload attempts
- Server crash logs with out-of-memory errors
Network Indicators:
- Unusual volume of model upload requests
- Large file transfers to Ollama API endpoints
SIEM Query:
source="ollama.log" AND ("out of memory" OR "memory allocation" OR "model upload")
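Outside a SIEM, the same indicators can be swept with a plain grep over the server's logs (the log path and exact message wording are assumptions; adjust to where your deployment writes Ollama's output):

```shell
#!/bin/sh
# Count log lines matching the advisory's indicator phrases.
LOG="${1:-/var/log/ollama.log}"   # assumed log location

scan_log() {
    # Case-insensitive count of lines containing any indicator phrase.
    grep -ciE 'out of memory|memory allocation|model upload' "$1" 2>/dev/null || true
}

hits=$(scan_log "$LOG")
echo "suspicious log lines: ${hits:-0}"
```

A sudden jump in this count, especially alongside large uploads, is worth investigating.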