CVE-2025-46153

5.3 MEDIUM

📋 TL;DR

This vulnerability in PyTorch versions before 3.7.0 affects the bernoulli_p decomposition, which is inconsistent with the CPU implementation and silently corrupts the output of dropout layers when models are compiled with fallback_random=True. This can lead to incorrect model behavior during training or inference. Users running PyTorch-based machine learning applications with the affected dropout layers are impacted.

💻 Affected Systems

Products:
  • PyTorch
Versions: All versions before 3.7.0
Operating Systems: All platforms running PyTorch
Default Config Vulnerable: ⚠️ Yes
Notes: Only affects models using nn.Dropout1d, nn.Dropout2d, or nn.Dropout3d layers when compiled with the fallback_random=True setting (torch._inductor.config.fallback_random).
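As a hedged illustration of the affected configuration: the sketch below builds a hypothetical model (`Net` is an illustrative name, not from the advisory) using one of the affected layers and enables the Inductor fallback_random flag. The inconsistency described here surfaces when such a model is run under torch.compile; the compile call is omitted so the sketch runs in eager mode.

```python
import torch
import torch.nn as nn
from torch._inductor import config as inductor_config

# Minimal illustrative model using one of the affected layers.
class Net(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.drop = nn.Dropout1d(p=0.5)  # affected: Dropout1d/2d/3d

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.drop(x)

# The advisory's trigger condition: fallback_random=True. In affected
# versions, compiling this model with torch.compile while the flag is
# enabled routes dropout through the inconsistent bernoulli_p
# decomposition.
inductor_config.fallback_random = True

model = Net().train()
out = model(torch.randn(2, 4, 8))  # eager run; output shape is preserved
```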

📦 What is this software?

PyTorch is an open-source machine learning framework widely used for building, training, and deploying deep neural networks in research and production.

⚠️ Risk & Real-World Impact

🔴

Worst Case

Machine learning models produce incorrect outputs during training or inference, potentially leading to degraded model performance, incorrect predictions, or data corruption in production systems.

🟠

Likely Case

Models using dropout layers exhibit unexpected behavior or reduced accuracy during training, requiring retraining or debugging to identify the root cause.

🟢

If Mitigated

Minor performance degradation or non-critical inconsistencies in model outputs that don't affect core functionality.

🌐 Internet-Facing: LOW - This is primarily a functional correctness issue affecting model training/inference rather than a traditional security vulnerability that enables unauthorized access or data exfiltration.
🏢 Internal Only: MEDIUM - Could impact internal ML pipelines, research experiments, or production inference systems relying on accurate model outputs.

🎯 Exploit Status

Public PoC: ✅ No
Weaponized: UNKNOWN
Unauthenticated Exploit: ✅ No
Complexity: MEDIUM

This is a functional bug rather than a traditional security exploit. Attackers would need to influence model training or inference to potentially cause harm.

🛠️ Fix & Mitigation

✅ Official Fix

Patch Version: PyTorch 3.7.0 and later

Vendor Advisory: https://github.com/pytorch/pytorch/issues/142853

Restart Required: No

Instructions:

1. Update PyTorch using pip: 'pip install torch==3.7.0' or 'pip install --upgrade torch'
2. Verify the update with 'python -c "import torch; print(torch.__version__)"'
3. Ensure the version is 3.7.0 or higher

🔧 Temporary Workarounds

Disable fallback_random

Applies to: all platforms

Leave fallback_random at its default of False (or set it explicitly) to avoid the problematic code path.

In your PyTorch code, before calling torch.compile, set torch._inductor.config.fallback_random = False. Note that fallback_random is an Inductor configuration flag, not a dropout-layer argument.
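A minimal sketch of this workaround; fallback_random lives in the Inductor configuration module rather than on the dropout layers themselves:

```python
from torch._inductor import config as inductor_config

# Workaround: keep the flag at its default (False) so compiled models
# avoid the inconsistent bernoulli_p decomposition path.
inductor_config.fallback_random = False
```

Apply this before the first torch.compile call so every compiled graph picks up the setting.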

Avoid affected dropout layers

Applies to: all platforms

Use alternative dropout implementations or regularization methods.

Consider using nn.Dropout() instead of nn.Dropout1d/2d/3d. Note that the regularization semantics differ: nn.Dropout zeroes individual elements, while the 1d/2d/3d variants zero entire channels.

🧯 If You Can't Patch

  • Implement input validation and output monitoring for ML models to detect anomalous behavior
  • Use model versioning and rollback capabilities to revert if models exhibit unexpected behavior

🔍 How to Verify

Check if Vulnerable:

Check the PyTorch version with 'python -c "import torch; print(torch.__version__)"' and review model definitions for nn.Dropout1d/2d/3d layers used together with fallback_random=True.

Check Version:

python -c "import torch; print(torch.__version__)"

Verify Fix Applied:

After updating, confirm the version is 3.7.0 or later, then exercise dropout layers with fallback_random=True and verify their outputs are consistent with the CPU eager implementation.
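The version gate above can be sketched as a small helper; is_vulnerable is a hypothetical name, and the 3.7.0 threshold is taken from this advisory's text:

```python
def is_vulnerable(version: str) -> bool:
    """Return True if a PyTorch version string predates the 3.7.0 fix."""
    # Strip local build suffixes such as "+cu121".
    base = version.split("+")[0]
    parts = []
    for piece in base.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    while len(parts) < 3:
        parts.append(0)  # pad short versions like "3.7"
    return tuple(parts) < (3, 7, 0)

print(is_vulnerable("2.6.0"))  # True: predates the fix
print(is_vulnerable("3.7.0"))  # False: patched
```

In practice, comparing torch.__version__ with packaging.version.parse handles pre-release and local-version tags more robustly than this sketch.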

📡 Detection & Monitoring

Log Indicators:

  • Unexpected model accuracy drops during training
  • Inconsistent inference outputs from ML models
  • Error logs related to dropout operations
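A hedged sketch of the first indicator, a rolling-baseline accuracy monitor; the class name, window size, and drop threshold are all illustrative assumptions rather than part of any PyTorch API:

```python
from collections import deque

class AccuracyDropMonitor:
    """Flag evaluations that fall well below a rolling baseline."""

    def __init__(self, window: int = 20, max_drop: float = 0.05) -> None:
        self.history = deque(maxlen=window)  # recent accuracy samples
        self.max_drop = max_drop             # tolerated drop vs. baseline

    def observe(self, accuracy: float) -> bool:
        """Record an accuracy sample; return True if it is anomalous."""
        baseline = sum(self.history) / len(self.history) if self.history else None
        self.history.append(accuracy)
        return baseline is not None and (baseline - accuracy) > self.max_drop

monitor = AccuracyDropMonitor()
for acc in [0.91, 0.92, 0.90, 0.91]:
    monitor.observe(acc)          # steady baseline, no alerts
alert = monitor.observe(0.72)     # sudden drop below the baseline
```

Feeding per-epoch validation accuracy (or per-batch inference metrics) through such a monitor makes the silent-incorrectness failure mode of this bug visible in logs.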

Network Indicators:

  • N/A - This is a local functional issue

SIEM Query:

N/A - This is not a network-exploitable vulnerability
