A unified Bayesian framework for adversarial robustness
Pablo G. Arce 1,2, Roi Naveiro 3, David Ríos Insua 1
Published on arXiv (arXiv:2510.09288)
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
The Bayesian framework recovers prior defenses as special cases, and experiments demonstrate that explicitly modeling adversarial uncertainty improves robustness over deterministic defenses.
Bayesian Adversarial Robustness Framework
Novel technique introduced
The vulnerability of machine learning models to adversarial attacks remains a critical security challenge. Traditional defenses, such as adversarial training, typically robustify models by minimizing a worst-case loss. However, these deterministic approaches do not account for uncertainty in the adversary's attack. While stochastic defenses placing a probability distribution on the adversary exist, they often lack statistical rigor and fail to make explicit their underlying assumptions. To resolve these issues, we introduce a formal Bayesian framework that models adversarial uncertainty through a stochastic channel, articulating all probabilistic assumptions. This yields two robustification strategies: a proactive defense enacted during training, aligned with adversarial training, and a reactive defense enacted during operations, aligned with adversarial purification. Several previous defenses can be recovered as limiting cases of our model. We empirically validate our methodology, showcasing the benefits of explicitly modeling adversarial uncertainty.
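The contrast between a deterministic worst-case objective and the expected loss under a distribution over attacks can be sketched for a linear classifier. This is an illustration only, not the paper's formulation: the uniform perturbation distribution, the logistic model, and all function names below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    # logistic loss of a linear classifier on (x, y), y in {-1, +1}
    return np.log1p(np.exp(-y * (w @ x)))

def grad_x_loss(w, x, y):
    # gradient of the logistic loss with respect to the input x
    s = 1.0 / (1.0 + np.exp(y * (w @ x)))
    return -y * s * w

def worst_case_loss(w, x, y, eps):
    # deterministic (adversarial-training-style) objective:
    # FGSM step to the worst case in the L-infinity ball of radius eps
    x_adv = x + eps * np.sign(grad_x_loss(w, x, y))
    return loss(w, x_adv, y)

def expected_adv_loss(w, x, y, eps, n_samples=1000):
    # stochastic-channel objective: average loss under an assumed
    # distribution over perturbations (here uniform in the same ball)
    deltas = rng.uniform(-eps, eps, size=(n_samples, x.size))
    return np.mean([loss(w, x + d, y) for d in deltas])

w = np.array([1.0, -2.0])
x = np.array([0.5, 0.3])
y = 1
print(worst_case_loss(w, x, y, 0.1), expected_adv_loss(w, x, y, 0.1))
```

Since the worst case dominates every perturbation in the ball, the deterministic objective upper-bounds the expected loss; a proactive defense trained on the expected loss trades that pessimism for calibrated uncertainty about the attack.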
Key Contributions
- A statistically grounded Bayesian framework that models adversarial uncertainty via a stochastic channel, making all probabilistic assumptions explicit
- Two derived defense strategies: a proactive defense (training-time, generalizing adversarial training) and a reactive defense (inference-time, generalizing adversarial purification)
- Demonstration that prominent prior defenses (adversarial training, randomized smoothing, diffusion-based purification) are recoverable as limiting cases of the unified model
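To illustrate one of these limiting cases, randomized smoothing can be read as classification under a Gaussian stochastic channel: noisy copies of the input are classified and the majority vote is returned. The toy base classifier, noise scale, and sample count below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def base_classifier(x):
    # toy base classifier: label is the sign of the first coordinate
    return 0 if x[0] < 0.0 else 1

def smoothed_classifier(x, sigma=0.5, n_samples=2000):
    # Gaussian channel: classify many noisy copies of x and take the
    # majority vote (randomized smoothing as a special case of the
    # stochastic-channel model)
    noise = rng.normal(0.0, sigma, size=(n_samples, x.size))
    votes = [base_classifier(x + d) for d in noise]
    return int(np.round(np.mean(votes)))

x_near = np.array([0.3, 0.0])  # a point close to the decision boundary
print(smoothed_classifier(x_near))
```

The same Monte Carlo machinery, with the channel moved to inference time and inverted, is how diffusion-style purification fits the reactive branch of the framework.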
🛡️ Threat Analysis
The paper directly targets adversarial evasion attacks (input manipulation at inference time), proposing a proactive training-time defense aligned with adversarial training and a reactive inference-time defense aligned with adversarial purification. The entire framework is built around defending against adversarial examples.