
Efficient Preemptive Robustification with Image Sharpening

Jiaming Liang, Chi-Man Pun


Published on arXiv (2603.25244)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Image sharpening achieves remarkable robustness gains with low computational cost, especially in transfer attack scenarios

Laplacian Sharpening

Novel technique introduced


Despite their great success, deep neural networks rely on high-dimensional, non-robust representations, making them vulnerable to imperceptible perturbations, even in transfer scenarios. To address this, both training-time defenses (e.g., adversarial training and robust architecture design) and post-attack defenses (e.g., input purification and adversarial detection) have been extensively studied. Recently, a limited body of work has preliminarily explored a pre-attack defense paradigm, termed preemptive robustification, which introduces subtle modifications to benign samples prior to attack to proactively resist adversarial perturbations. Unfortunately, their practical applicability remains questionable due to several limitations, including (1) reliance on well-trained classifiers as surrogates to provide robustness priors, (2) substantial computational overhead arising from iterative optimization or trained generators for robustification, and (3) limited interpretability of the optimization- or generation-based robustification processes. Inspired by recent studies revealing a positive correlation between texture intensity and the robustness of benign samples, we show that image sharpening alone can efficiently robustify images. To the best of our knowledge, this is the first surrogate-free, optimization-free, generator-free, and human-interpretable robustification approach. Extensive experiments demonstrate that sharpening yields remarkable robustness gains with low computational cost, especially in transfer scenarios.
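The robustification step described above amounts to a single sharpening pass over the benign image before it is exposed to attack. A minimal sketch of Laplacian sharpening is shown below; the 4-neighbour kernel and the strength parameter `alpha` are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Standard 4-neighbour Laplacian kernel. Sharpening computes
# I - alpha * Laplacian(I), which amplifies high-frequency texture.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def laplacian_sharpen(img: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Sharpen a 2-D grayscale image with values in [0, 1].

    alpha controls sharpening strength (assumed value, for illustration).
    """
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")  # replicate borders
    lap = np.zeros_like(img, dtype=np.float64)
    # Small explicit 3x3 convolution (cross-correlation; the kernel
    # is symmetric, so the two coincide).
    for dy in range(3):
        for dx in range(3):
            lap += LAPLACIAN[dy, dx] * padded[dy:dy + h, dx:dx + w]
    # Subtracting the Laplacian boosts edges and texture intensity.
    return np.clip(img - alpha * lap, 0.0, 1.0)
```

The sharpened output would then be fed to the classifier in place of the original image; no surrogate model, optimization loop, or trained generator is involved.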


Key Contributions

  • First surrogate-free, optimization-free, generator-free preemptive robustification method using image sharpening
  • Human-interpretable defense leveraging correlation between texture intensity and adversarial robustness
  • Low computational overhead compared to iterative optimization or neural generator-based robustification

🛡️ Threat Analysis

Input Manipulation Attack

The paper proposes a defense against adversarial examples (input manipulation attacks) that preemptively modifies benign images via sharpening so they resist subsequent adversarial perturbations. The defense is evaluated against gradient-based attacks (MA attack) in transfer scenarios.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
inference_time, digital, black_box
Applications
image classification