CVE-2025-47291
📋 TL;DR
A bug in containerd's CRI implementation causes usernamespaced containers to be created outside the Kubernetes-managed cgroup hierarchy, so the resource limits Kubernetes assigns to the pod are never enforced. A malicious or misbehaving container can therefore consume excessive CPU and memory, leading to denial of service on the node. Affected users are those running containerd versions 2.0.1 through 2.0.4 in Kubernetes environments with usernamespaced pods enabled.
💻 Affected Systems
- containerd
📦 What is this software?
containerd is an industry-standard container runtime and a CNCF graduated project hosted by the Linux Foundation. It serves as the container runtime beneath Kubernetes (via the CRI) on many distributions and also underpins Docker.
⚠️ Risk & Real-World Impact
Worst Case
Complete node resource exhaustion leading to system-wide denial of service, affecting all workloads on the compromised node.
Likely Case
Resource starvation for other containers on the same node, causing performance degradation or service disruption.
If Mitigated
Minimal impact if proper resource monitoring and node isolation are in place.
🎯 Exploit Status
Exploitation requires ability to deploy containers in the Kubernetes cluster. No authentication bypass needed beyond normal container deployment permissions.
🛠️ Fix & Mitigation
✅ Official Fix
Patch Version: 2.0.5+ or 2.1.0+
Vendor Advisory: https://github.com/containerd/containerd/security/advisories/GHSA-cxfp-7pvr-95ff
Restart Required: Yes
Instructions:
1. Stop all containers and Kubernetes workloads on the node (drain the node).
2. Update containerd to version 2.0.5 or later (or 2.1.0+).
3. Restart the containerd service.
4. Verify the fix by checking the containerd version and testing usernamespaced pod deployment.
🔧 Temporary Workarounds
Disable usernamespaced pods
Temporarily disable usernamespaced pods in Kubernetes to prevent exploitation while patching. There is no dedicated kubelet flag for this; instead, turn off the UserNamespacesSupport feature gate on the kubelet (via --feature-gates=UserNamespacesSupport=false or the featureGates field of the kubelet configuration file), then restart the kubelet:
systemctl restart kubelet
With the gate disabled, pods that request a user namespace (hostUsers: false) are rejected at admission.
🧯 If You Can't Patch
- Implement strict resource quotas and limits at the Kubernetes namespace level
- Isolate usernamespaced pods to dedicated nodes with enhanced monitoring
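The first mitigation above can be sketched as a namespace-level ResourceQuota. The quota values and the `untrusted` namespace name below are illustrative placeholders, not values from the advisory:

```shell
# Sketch of a namespace-level ResourceQuota (values and the "untrusted"
# namespace name are illustrative placeholders).
cat > userns-quota.yaml <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: userns-pod-quota
  namespace: untrusted
spec:
  hard:
    pods: "20"
    limits.cpu: "8"
    limits.memory: 16Gi
EOF

# Apply to the cluster (requires kubectl access; commented out here):
# kubectl apply -f userns-quota.yaml
```

Note that a ResourceQuota caps the aggregate *declared* limits at admission time; since this vulnerability causes declared limits to go unenforced at the cgroup level, the quota reduces blast radius but does not stop a single misplaced container from exceeding its own limits, so pair it with node-level monitoring.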
🔍 How to Verify
Check if Vulnerable:
Check containerd version and verify if usernamespaced pods are enabled in Kubernetes configuration.
Check Version:
containerd --version
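The version check can be automated against the advisory's affected range (2.0.1 through 2.0.4) with a small shell helper that uses sort -V for version ordering; how you extract the bare version string from `containerd --version` output may vary by build:

```shell
# Returns success (0) if the given containerd version falls in the
# vulnerable range 2.0.1-2.0.4 reported in the advisory.
is_vulnerable() {
  v="$1"
  # vulnerable iff 2.0.1 <= v <= 2.0.4 under version ordering
  lowest="$(printf '2.0.1\n%s\n' "$v" | sort -V | head -n1)"
  highest="$(printf '%s\n2.0.4\n' "$v" | sort -V | tail -n1)"
  [ "$lowest" = "2.0.1" ] && [ "$highest" = "2.0.4" ]
}

# Example: extract the version from containerd (field position is an
# assumption; adjust to your build's output), then test it:
#   ver="$(containerd --version | awk '{print $3}' | tr -d 'v')"
is_vulnerable "2.0.3" && echo "vulnerable"   # → vulnerable
is_vulnerable "2.0.5" || echo "patched"      # → patched
```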
Verify Fix Applied:
Deploy a test usernamespaced pod with resource limits and confirm its cgroup placement by running 'cat /proc/self/cgroup' inside the container; on a patched containerd the path should fall under the Kubernetes-managed hierarchy (e.g. a kubepods path).
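This verification step can be sketched as follows. The pod name, image, and limit values are placeholders; `hostUsers: false` is the pod-spec field that requests a user namespace:

```shell
# Sketch of a verification pod (name, image, and limits are placeholders).
cat > userns-cgroup-check.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: userns-cgroup-check
spec:
  hostUsers: false
  containers:
  - name: probe
    image: busybox
    command: ["sleep", "3600"]
    resources:
      limits:
        cpu: "100m"
        memory: 64Mi
EOF

# Against a live cluster (commented out; requires kubectl):
# kubectl apply -f userns-cgroup-check.yaml
# kubectl exec userns-cgroup-check -- cat /proc/self/cgroup
# On a fixed containerd, the reported cgroup path should sit under the
# Kubernetes-managed hierarchy (e.g. a kubepods.slice/... path).
```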
📡 Detection & Monitoring
Log Indicators:
- Containerd logs showing usernamespaced container creation without proper cgroup placement
- Kubernetes events indicating resource limit violations
Resource Indicators:
- Unusual CPU or memory consumption from specific pods, in excess of their declared limits
SIEM Query:
source="containerd" AND "userns" AND NOT "cgroup"
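On a single node, the SIEM query above can be approximated with a grep pipeline over containerd's journal. The "userns" and "cgroup" keywords mirror the query and are assumptions about log wording, not exact containerd message strings; adapt them to your actual log format:

```shell
# Keep log lines that mention user namespaces but no cgroup placement.
# Keywords mirror the SIEM query above and are assumptions about the
# log wording, not exact containerd messages.
filter_userns_no_cgroup() {
  grep -i 'userns' | grep -vi 'cgroup'
}

# Typical use on a systemd host (commented out; needs journal access):
# journalctl -u containerd --since "-24h" | filter_userns_no_cgroup
```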