defense · arXiv · Nov 7, 2025
Jun Li, Yanwei Xu, Keran Li et al. · Jilin University of Finance and Economics · Center for Artificial Intelligence · Jilin University
Detects adversarial examples via sliding-window occlusion confidence entropy, achieving up to 96.5% detection on CIFAR-10 across nine attacks
Input Manipulation Attack · vision
Understanding the intrinsic differences between adversarial examples and clean samples is key to enhancing DNN robustness and to detecting adversarial attacks. This study first establishes empirically that image-based adversarial examples are notably sensitive to occlusion. Controlled experiments on CIFAR-10 used nine canonical attacks (e.g., FGSM, PGD) to generate adversarial examples, each paired with its clean original for evaluation. We introduce Sliding Mask Confidence Entropy (SMCE) to quantify the fluctuation of model confidence under occlusion. On 1,800+ test images, SMCE measurements, supported by Mask Entropy Field Maps and statistical distributions, show that adversarial examples exhibit significantly higher confidence volatility under occlusion than their clean counterparts. Building on this, we propose Sliding Window Mask-based Adversarial Example Detection (SWM-AED), which avoids the catastrophic overfitting of conventional adversarial training. Evaluations across classifiers and attacks on CIFAR-10 demonstrate robust performance, with detection accuracy above 62% in most cases and up to 96.5%.
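The abstract's core idea can be sketched in a few lines: slide an occluding mask over the input, record the model's confidence at each mask position, and compute an entropy over those confidences. The paper does not specify the window size, stride, fill value, or exact entropy formulation, so everything below (zero-fill occlusion, top-class confidence, normalizing the confidence sequence into a distribution before taking Shannon entropy) is a hypothetical reading, not the authors' implementation:

```python
import numpy as np


def sliding_mask_confidence_entropy(image, predict_fn, window=8, stride=4):
    """Hypothetical sketch of an SMCE-style score.

    image      : H x W x C array in [0, 1]
    predict_fn : maps an image to a probability vector over classes
    window, stride : occlusion geometry (assumed values, not from the paper)
    """
    H, W = image.shape[:2]
    confidences = []
    for y in range(0, H - window + 1, stride):
        for x in range(0, W - window + 1, stride):
            occluded = image.copy()
            # Zero-fill occlusion is an assumption; gray or mean fill also plausible.
            occluded[y:y + window, x:x + window] = 0.0
            probs = predict_fn(occluded)
            confidences.append(float(np.max(probs)))  # top-class confidence
    p = np.asarray(confidences)
    p = p / p.sum()  # treat the confidence sequence as a distribution
    # Shannon entropy of that distribution; higher = more volatile under occlusion.
    return float(-(p * np.log(p + 1e-12)).sum())
```

A detector in the SWM-AED spirit would then threshold this score: clean samples tend to keep stable confidence across mask positions (low entropy relative to the number of positions), while adversarial examples fluctuate sharply. The threshold itself would be fit per classifier on held-out clean/adversarial pairs.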
cnn