CVE-2024-9779

CVSS 7.5 (HIGH)

📋 TL;DR

This vulnerability in Open Cluster Management allows attackers with access to worker nodes to steal service account tokens and gain full cluster control. It affects OCM deployments where cluster-manager or klusterlet pods run on compromised nodes. Attackers can escalate privileges to control the entire Kubernetes cluster.

💻 Affected Systems

Products:
  • Open Cluster Management (OCM)
Versions: before v0.13.0
Operating Systems: Linux
Default Config Vulnerable: ⚠️ Yes
Notes: Only affects deployments where cluster-manager or klusterlet pods run on nodes accessible to attackers. Requires attacker to have node-level access.

⚠️ Manual Verification Required

This CVE entry lacks machine-readable version information in our database, so automatic detection cannot determine whether your system is affected.

Why? The CVE database entry doesn't specify which versions are vulnerable (no version ranges are provided by the vendor or NVD).


Recommended Actions:
  1. Review the CVE details at NVD
  2. Check vendor security advisories for your specific version
  3. Test if the vulnerability is exploitable in your environment
  4. Consider updating to the latest version as a precaution

⚠️ Risk & Real-World Impact

🔴 Worst Case: Complete cluster compromise allowing attackers to deploy malicious workloads, exfiltrate sensitive data, disrupt operations, and maintain persistent access.

🟠 Likely Case: Privilege escalation leading to unauthorized access to cluster resources, potential data theft, and lateral movement within the environment.

🟢 If Mitigated: Limited impact if proper node security controls, network segmentation, and least-privilege service accounts are implemented.

🌐 Internet-Facing: LOW - This requires access to worker nodes, which typically shouldn't be internet-facing in properly configured environments.
🏢 Internal Only: HIGH - Attackers with internal network access to worker nodes can exploit this vulnerability to gain cluster-wide control.

🎯 Exploit Status

Public PoC: ❌ No
Weaponized: LIKELY
Unauthenticated Exploit: ❌ No
Complexity: MEDIUM

Exploitation requires node access and knowledge of Kubernetes service account token mounting. The technique is well-documented in Kubernetes security contexts.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: v0.13.0

Vendor Advisory: https://access.redhat.com/security/cve/CVE-2024-9779

Restart Required: Yes

Instructions:

1. Upgrade Open Cluster Management to version 0.13.0 or later.
2. Update the cluster-manager and klusterlet deployments.
3. Verify that service account permissions are properly restricted.
4. Restart affected pods.
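The restart step can be sketched as a dry run that prints the commands rather than executing them; the namespace and deployment names below are common defaults for an OCM install, not confirmed by the advisory, so verify yours first:

```shell
# Sketch: print the rollout-restart commands for the OCM control-plane
# workloads so pods pick up the patched image after the upgrade.
# Drop the `echo` to run them against a live cluster.
NAMESPACE="open-cluster-management"   # assumed default install namespace
for deploy in cluster-manager klusterlet; do
  echo kubectl -n "$NAMESPACE" rollout restart "deployment/$deploy"
done
```

`kubectl rollout restart` recreates the pods under each deployment, which forces them to pull the upgraded image without editing manifests by hand.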

🔧 Temporary Workarounds

Restrict Node Access (applies to: all platforms)

Implement strict access controls on worker nodes and ensure only authorized personnel can access them.

Limit Service Account Permissions (applies to: Linux)

Review the cluster-manager service account's permissions and reduce them to the minimum required.

kubectl get clusterrole cluster-manager -o yaml
kubectl edit clusterrole cluster-manager
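When reviewing the dumped ClusterRole, wildcard grants are the first thing to tighten. A minimal triage sketch, using an inlined hypothetical sample in place of live kubectl output:

```shell
# Sketch: flag wildcard grants in a ClusterRole dump. In practice the
# input would come from `kubectl get clusterrole cluster-manager -o yaml`;
# a hypothetical sample is written to a temp file here for illustration.
cat > /tmp/clusterrole-sample.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-manager
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
EOF
# Any '*' in apiGroups/resources/verbs is a candidate for tightening.
grep -n '\*' /tmp/clusterrole-sample.yaml && echo "wildcard grants found"
```

A stolen token is only as dangerous as the role bound to it, so narrowing these rules directly limits the blast radius of this CVE.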

🧯 If You Can't Patch

  • Implement strict network segmentation to isolate worker nodes from untrusted networks
  • Apply Kubernetes Pod Security Standards to restrict pod capabilities and service account token mounting
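The token-mounting restriction in the second bullet can be sketched as a manifest fragment; the names below (`example-sa`, `example-ns`, the image) are illustrative placeholders, not part of OCM:

```yaml
# Sketch: opt a workload out of automatic token mounting so a node-level
# attacker reading the pod filesystem finds no long-lived credential.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa        # hypothetical name
  namespace: example-ns   # hypothetical namespace
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: example-ns
spec:
  serviceAccountName: example-sa
  automountServiceAccountToken: false   # belt and braces at pod level too
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
```

Workloads that genuinely need API access can instead request short-lived, audience-bound tokens via a projected volume, which are far less useful if stolen.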

🔍 How to Verify

Check if Vulnerable:

Check the OCM version:

kubectl get deployment -n open-cluster-management cluster-manager -o jsonpath='{.spec.template.spec.containers[0].image}'

If the image tag is below v0.13.0, you are vulnerable.

Check Version:

kubectl get deployment -n open-cluster-management cluster-manager -o jsonpath='{.spec.template.spec.containers[0].image}' | grep -o ':[0-9.]*'
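The extracted tag can be compared against the fixed release with `sort -V`; a minimal sketch, using a hard-coded hypothetical image string in place of live cluster output:

```shell
# Sketch: compare the running image tag against the fixed release.
# In practice IMAGE would come from the kubectl command above; here a
# sample value is hard-coded for illustration.
IMAGE="quay.io/open-cluster-management/registration-operator:v0.12.0"
FIXED="v0.13.0"

TAG="${IMAGE##*:}"    # everything after the last ':' is the tag
# sort -V orders version strings; if TAG sorts first and differs from
# FIXED, the deployment predates the fix.
OLDEST="$(printf '%s\n%s\n' "$TAG" "$FIXED" | sort -V | head -n 1)"
if [ "$TAG" != "$FIXED" ] && [ "$OLDEST" = "$TAG" ]; then
  echo "vulnerable: $TAG is older than $FIXED"
else
  echo "ok: $TAG includes the fix"
fi
```

`sort -V` handles multi-digit components correctly (v0.9.0 vs v0.13.0), which a plain lexical comparison would get wrong.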

Verify Fix Applied:

Verify that the OCM version is 0.13.0 or higher and that the cluster-manager service account's permissions are properly restricted.

📡 Detection & Monitoring

Log Indicators:

  • Unauthorized pod creation events
  • Service account token access from unexpected nodes
  • ClusterRoleBinding modifications

Network Indicators:

  • Unexpected API server requests from worker nodes
  • Lateral movement patterns within the cluster

SIEM Query:

source="kubernetes-audit" AND verb="create" AND resource="pods" AND user.username="system:serviceaccount:open-cluster-management:cluster-manager"
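For the log indicators above to appear at all, the API server must be recording audit events. A minimal audit-policy sketch covering pod creation and ClusterRoleBinding changes; adapt levels and resource coverage to your pipeline:

```yaml
# Sketch: Kubernetes audit policy capturing the events referenced above.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse        # full payload for pod creation
  verbs: ["create"]
  resources:
  - group: ""
    resources: ["pods"]
- level: Metadata               # who/when for RBAC changes
  verbs: ["create", "update", "patch"]
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["clusterrolebindings"]
```

The policy file is passed to the API server via `--audit-policy-file`; without it, the SIEM query above has no data to match.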
