defense · 2026

PurSAMERE: Reliable Adversarial Purification via Sharpness-Aware Minimization of Expected Reconstruction Error

Vinh Hoang 1,2, Sebastian Krumscheid 3, Holger Rauhut 4, Raúl Tempone 5

0 citations · 26 references · arXiv (Cornell University)


Published on arXiv · 2602.06269

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Demonstrates significant gains in adversarial robustness over state-of-the-art methods under strong deterministic white-box attacks, while avoiding the accuracy degradation observed in stochastic purification methods when the adversary has full knowledge of the system's randomness.

PurSAMERE

Novel technique introduced


We propose a novel deterministic purification method to improve adversarial robustness by mapping a potentially adversarial sample toward a nearby sample that lies close to a mode of the data distribution, where classifiers are more reliable. We design the method to be deterministic to ensure reliable test accuracy and to prevent the degradation of effective robustness observed in stochastic purification approaches when the adversary has full knowledge of the system and its randomness. We employ a score model trained by minimizing the expected reconstruction error of noise-corrupted data, thereby learning the structural characteristics of the input data distribution. Given a potentially adversarial input, the method searches within its local neighborhood for a purified sample that minimizes the expected reconstruction error under noise corruption and then feeds this purified sample to the classifier. During purification, sharpness-aware minimization is used to guide the purified samples toward flat regions of the expected reconstruction error landscape, thereby enhancing robustness. We further show that, as the noise level decreases, minimizing the expected reconstruction error biases the purified sample toward local maximizers of the Gaussian-smoothed density; under additional local assumptions on the score model, we prove recovery of a local maximizer in the small-noise limit. Experimental results demonstrate significant gains in adversarial robustness over state-of-the-art methods under strong deterministic white-box attacks.
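The purification loop described in the abstract, descending the expected reconstruction error of a noise-corrupted input while using sharpness-aware minimization (SAM) for each step, can be sketched as follows. This is a toy sketch under simplifying assumptions: a closed-form Gaussian denoiser stands in for the trained score model, and the names `purify_sam`, `rho`, and the finite-difference gradient are our illustrative choices, not the paper's implementation.

```python
import numpy as np

# Toy setup: data ~ N(mu, s^2 I). For Gaussian data the Bayes-optimal
# denoiser at noise level sigma has a closed form, so it stands in for
# the trained score model. All names here are illustrative.
mu = np.array([2.0, -1.0])
s, sigma = 0.5, 0.3
EPS = np.random.default_rng(0).standard_normal((256, 2))  # fixed MC noise
EPS -= EPS.mean(axis=0)  # center the Monte Carlo noise to reduce bias

def denoiser(z):
    # Optimal denoiser = z + sigma^2 * score of the Gaussian-smoothed density
    return z - sigma**2 * (z - mu) / (s**2 + sigma**2)

def expected_rec_error(x):
    # Monte Carlo estimate of E_eps || D(x + sigma*eps) - x ||^2
    z = x + sigma * EPS
    return np.mean(np.sum((denoiser(z) - x) ** 2, axis=1))

def grad(f, x, h=1e-4):
    # Central finite differences as a stand-in for autodiff
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = h
        g[i] = (f(x + d) - f(x - d)) / (2.0 * h)
    return g

def purify_sam(x_adv, steps=100, lr=0.1, rho=0.05):
    # Deterministic purification: descend the expected reconstruction
    # error, taking each gradient at the SAM "worst-case" point
    # x + rho * g / ||g|| so that flat regions of the landscape are preferred.
    x = x_adv.copy()
    for _ in range(steps):
        g = grad(expected_rec_error, x)
        g_sam = grad(expected_rec_error, x + rho * g / (np.linalg.norm(g) + 1e-12))
        x = x - lr * g_sam
    return x

x_adv = mu + np.array([0.4, -0.4])  # perturbed input near the mode
x_pur = purify_sam(x_adv)
# Purification should move the sample back toward the mode mu
print(np.linalg.norm(x_adv - mu), np.linalg.norm(x_pur - mu))
```

Because the Monte Carlo noise `EPS` is drawn once and fixed, the whole pipeline is deterministic: repeated calls on the same input produce the same purified sample, which is the reliability property the abstract emphasizes.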


Key Contributions

  • Deterministic adversarial purification method (PurSAMERE) that maps adversarial inputs toward local density maxima using a score model trained to minimize expected reconstruction error of noise-corrupted data
  • Integration of sharpness-aware minimization (SAM) during purification to drive purified samples toward flat regions of the expected reconstruction error landscape, enhancing robustness
  • Theoretical proof that minimizing expected reconstruction error biases purified samples toward local maximizers of the Gaussian-smoothed density in the small-noise limit
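The third contribution can be written out in the standard denoising form (our notation, not necessarily the paper's; the paper's exact reconstruction target and local assumptions may differ):

```latex
% Purification objective: expected reconstruction error under noise corruption
J_\sigma(x) \;=\; \mathbb{E}_{\varepsilon \sim \mathcal{N}(0, I)}
  \left\| D_\theta(x + \sigma \varepsilon) - x \right\|^2 .

% For the Bayes-optimal denoiser, Tweedie's formula gives
D^\ast(z) \;=\; z + \sigma^2 \, \nabla_z \log p_\sigma(z),
\qquad p_\sigma \;=\; p \ast \mathcal{N}(0, \sigma^2 I),

% so stationary points of J_\sigma align, as \sigma \to 0, with zeros of
% \nabla \log p_\sigma, i.e. candidate local maximizers of the
% Gaussian-smoothed density p_\sigma.
```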

🛡️ Threat Analysis

Input Manipulation Attack

Proposes an adversarial purification defense that removes adversarial perturbations from inputs at inference time before classification, specifically designed to maintain robustness under strong deterministic white-box attacks — a direct defense against adversarial example (evasion) attacks.
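One reason the determinism matters for this threat model: with a stochastic purifier, a borderline input can receive different labels across queries, which both degrades reliable test accuracy and gives a randomness-aware adversary something to average over. A minimal toy illustration (all functions are hypothetical stand-ins, not the paper's components):

```python
import numpy as np

def classifier(x):
    # Toy 1-D threshold classifier (illustrative only)
    return int(x > 0.0)

def stochastic_purify(x, rng):
    # Stands in for noise-injection purification defenses
    return x + 0.5 * rng.standard_normal()

def deterministic_purify(x):
    # Stands in for a fixed purification map such as PurSAMERE's descent
    return 0.9 * x

rng = np.random.default_rng(0)
x = 0.1  # borderline input near the decision boundary
stoch_labels = {classifier(stochastic_purify(x, rng)) for _ in range(20)}
det_labels = {classifier(deterministic_purify(x)) for _ in range(20)}
print(stoch_labels, det_labels)  # stochastic may flip labels; deterministic cannot
```

The deterministic pipeline always returns a single label for a given input, so its reported clean accuracy is reproducible and there is no randomness for an adversary with full system knowledge to exploit.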


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, inference_time, digital
Applications
image classification