CVE-2024-8063

7.5 HIGH

📋 TL;DR

A divide-by-zero vulnerability in ollama/ollama v0.3.3 lets attackers cause a denial of service via a GGUF model file with a crafted block_count value. Any server running a vulnerable version that imports untrusted model files is affected: parsing the malicious model crashes the server process, disrupting AI inference services.
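The failure class can be illustrated with a short sketch. This is not ollama's actual code; the function name and arithmetic below are hypothetical, but they show how an unvalidated block_count of zero turns into a process-killing division:

```python
# Illustrative sketch (not ollama's real importer): a GGUF loader that
# divides by the model's block_count metadata without validating it first.
def estimate_kv_cache_per_block(total_kv_bytes, block_count):
    # Hypothetical calculation; a crafted model with block_count == 0
    # raises ZeroDivisionError here, crashing the server process.
    return total_kv_bytes // block_count

print(estimate_kv_cache_per_block(1 << 20, 32))  # normal model: 32768

try:
    estimate_kv_cache_per_block(1 << 20, 0)      # malicious model
except ZeroDivisionError:
    print("crash path: divide by zero on block_count == 0")
```

In Go (ollama's implementation language) an integer division by zero is a runtime panic rather than an exception, which is why the whole server terminates instead of failing a single request.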

💻 Affected Systems

Products:
  • ollama/ollama
Versions: v0.3.3
Operating Systems: All platforms running ollama
Default Config Vulnerable: ⚠️ Yes
Notes: Vulnerable when importing GGUF models via Modelfile. Only affects servers that process untrusted or malicious model files.

📦 What is this software?

Ollama is an open-source tool for downloading and running large language models locally. It exposes an HTTP API for inference and loads models stored in the GGUF format, which can also be imported from user-supplied files via a Modelfile.

⚠️ Risk & Real-World Impact

🔴 Worst Case

Complete service outage of ollama server, disrupting all AI model inference capabilities and potentially affecting dependent applications.

🟠 Likely Case

Server crash requiring manual restart, causing temporary service disruption until recovery.

🟢 If Mitigated

No impact if models are validated before import or come only from trusted sources.

🌐 Internet-Facing: MEDIUM - Requires model import capability and processing of malicious files, but could be exploited if server accepts external model uploads.
🏢 Internal Only: LOW - Lower risk if models come from trusted internal sources and import controls are in place.

🎯 Exploit Status

Public PoC: ❌ No
Weaponized: ❓ UNKNOWN
Unauthenticated Exploit: ❌ No
Complexity: MEDIUM

Exploitation requires crafting a malicious GGUF model file with specific block_count values and convincing the server to import it.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: v0.3.4 or later

Vendor Advisory: https://github.com/ollama/ollama/security/advisories/GHSA-xxxx-xxxx-xxxx

Restart Required: No

Instructions:

1. Update ollama to v0.3.4 or later using your package manager.
2. For manual installs, download the latest release from GitHub and replace the existing binary.
3. No restart is needed for the patch to apply.

🔧 Temporary Workarounds

Restrict model imports

Platforms: all

Only import models from trusted, verified sources, and validate GGUF files before processing.

Network segmentation

Platforms: all

Isolate ollama servers from untrusted networks and restrict model upload capabilities.
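The GGUF pre-validation idea can be sketched as below. This is a deliberately simplified scan, not a full GGUF parser: it only decodes leading integer-typed metadata keys and stops at value types it cannot skip, so a real validator must handle every GGUF value type (strings, arrays, floats, etc.) to walk all key/value pairs:

```python
import struct

GGUF_MAGIC = b"GGUF"
# GGUF metadata value-type ids for the two integer types handled here
UINT32, UINT64 = 4, 10

def check_block_count(data: bytes) -> bool:
    """Very simplified GGUF scan: return False if a *.block_count metadata
    key is present with value zero. A production validator must decode
    every GGUF value type in order to visit all metadata pairs."""
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    # Header after magic: version (u32), tensor count (u64), kv count (u64)
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    off = 24
    for _ in range(n_kv):
        (klen,) = struct.unpack_from("<Q", data, off); off += 8
        key = data[off:off + klen].decode(); off += klen
        (vtype,) = struct.unpack_from("<I", data, off); off += 4
        if vtype == UINT32:
            (val,) = struct.unpack_from("<I", data, off); off += 4
        elif vtype == UINT64:
            (val,) = struct.unpack_from("<Q", data, off); off += 8
        else:
            break  # sketch stops at value types it cannot skip over
        if key.endswith(".block_count") and val == 0:
            return False
    return True

# Crafted in-memory example: header + one kv ("llama.block_count" = 0, UINT32)
key = b"llama.block_count"
bad = (GGUF_MAGIC + struct.pack("<IQQ", 3, 0, 1)
       + struct.pack("<Q", len(key)) + key + struct.pack("<I", UINT32)
       + struct.pack("<I", 0))
print(check_block_count(bad))  # False: reject before the server parses it
```

Running such a check in the import pipeline, before the file ever reaches the server's model loader, keeps a crafted file from taking the service down.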

🧯 If You Can't Patch

  • Implement strict model validation: reject GGUF files with suspicious block_count values before processing.
  • Monitor server logs for crash events and implement automated restart mechanisms for resilience.

🔍 How to Verify

Check if Vulnerable:

Run ollama --version. If the output shows v0.3.3, the system is vulnerable.

Check Version:

ollama --version

Verify Fix Applied:

After update, run ollama --version and confirm version is v0.3.4 or higher.
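The version comparison can be scripted, for example in a patch-compliance check. A minimal sketch using coreutils' version sort (in practice you would feed it the string captured from `ollama --version`):

```shell
# Return success (0) if the given ollama version predates the v0.3.4 fix.
is_vulnerable() {
  v="$1"
  # sort -V orders version strings numerically; if the lowest of
  # {v, 0.3.4} is v and v is not itself 0.3.4, v predates the fix.
  lowest=$(printf '%s\n%s\n' "$v" "0.3.4" | sort -V | head -n1)
  [ "$lowest" = "$v" ] && [ "$v" != "0.3.4" ]
}

is_vulnerable "0.3.3" && echo "0.3.3 predates the fix"
is_vulnerable "0.3.5" || echo "0.3.5 is patched"
```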

📡 Detection & Monitoring

Log Indicators:

  • Server crash logs with divide-by-zero errors
  • Unexpected termination of ollama process
  • Error messages related to GGUF model parsing
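The log indicators above can be matched with a small scan over collected logs. The Go runtime's panic text for this bug class is "runtime error: integer divide by zero"; the other patterns below are assumptions to adjust for your log format:

```python
import re

# Patterns suggesting this crash in ollama server logs. "integer divide
# by zero" is the Go runtime's panic text; the rest are assumptions.
INDICATORS = [
    r"integer divide by zero",
    r"panic:",
    r"GGUF",
]

def scan_log(text):
    """Return the indicator patterns found in a log excerpt."""
    return [p for p in INDICATORS if re.search(p, text, re.IGNORECASE)]

sample = ("panic: runtime error: integer divide by zero\n"
          "... while parsing GGUF metadata ...")
print(scan_log(sample))  # ['integer divide by zero', 'panic:', 'GGUF']
```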

Network Indicators:

  • Sudden drop in ollama service availability
  • Failed model import requests

SIEM Query:

process:ollama AND (event:crash OR error:"divide by zero" OR error:GGUF)
