
SAFER-AiD: Saccade-Assisted Foveal-peripheral vision Enhanced Reconstruction for Adversarial Defense

Jiayang Liu, Daniel Tso, Yiming Bu, Qinru Qiu


Published on arXiv: 2510.08761

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Improves adversarial robustness across diverse classifiers and attack types on ImageNet while significantly reducing training overhead compared to both biologically and non-biologically inspired defenses.

SAFER-AiD

Novel technique introduced


Adversarial attacks significantly challenge the safe deployment of deep learning models, particularly in real-world applications. Traditional defenses often rely on computationally intensive optimization (e.g., adversarial training or data augmentation) to improve robustness, whereas the human visual system achieves inherent robustness to adversarial perturbations through evolved biological mechanisms. We hypothesize that attention-guided, non-homogeneous sparse sampling and predictive coding play a key role in this robustness. To test this hypothesis, we propose a novel defense framework incorporating three key biological mechanisms: foveal-peripheral processing, saccadic eye movements, and cortical filling-in. Our approach employs reinforcement learning-guided saccades to selectively capture multiple foveal-peripheral glimpses, which are integrated into a reconstructed image before classification. This biologically inspired preprocessing effectively mitigates adversarial noise, preserves semantic integrity, and notably requires no retraining or fine-tuning of downstream classifiers, enabling seamless integration with existing systems. Experiments on the ImageNet dataset demonstrate that our method improves system robustness across diverse classifiers and attack types, while significantly reducing training overhead compared to both biologically and non-biologically inspired defense techniques.


Key Contributions

  • Biologically-inspired preprocessing framework combining foveal-peripheral processing, saccadic eye movements, and cortical filling-in to remove adversarial noise before classification
  • Reinforcement learning-guided saccade mechanism that selectively captures multiple foveal-peripheral glimpses and integrates them into a reconstructed clean image
  • Plug-and-play defense requiring no retraining or fine-tuning of downstream classifiers, enabling seamless integration with existing systems
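The glimpse-and-reconstruct preprocessing described above can be sketched roughly as follows. This is a minimal NumPy illustration of the data flow, not the authors' implementation: the block-averaged "periphery", the fixed glimpse centers, and all function names are assumptions, and the paper's RL-learned saccade policy and cortical filling-in model are replaced with fixed inputs and a coarse fallback.

```python
import numpy as np

def foveal_glimpse(image, center, radius, k=4):
    """One glimpse: full resolution inside the foveal disc, coarse
    (k x k block-averaged) everywhere else as a crude stand-in for the
    low-acuity periphery. Assumes image dims are divisible by k."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    fovea = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    coarse = image.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    coarse = np.repeat(np.repeat(coarse, k, axis=0), k, axis=1)
    glimpse = coarse.copy()
    glimpse[fovea] = image[fovea]   # sharp pixels only inside the fovea
    return glimpse, fovea

def reconstruct(image, centers, radius):
    """Integrate several glimpses before classification: start from the
    coarse periphery (a stand-in for 'filling-in') and paste in each
    sharp foveal patch. High-frequency adversarial perturbations survive
    only where a fovea landed; elsewhere they are averaged away."""
    out, covered = foveal_glimpse(image, centers[0], radius)
    for c in centers[1:]:
        _, fovea = foveal_glimpse(image, c, radius)
        out[fovea] = image[fovea]
        covered |= fovea
    return out, covered
```

In the actual framework the fixation centers would come from the reinforcement-learning saccade policy and the reconstruction from a learned filling-in model; here they are fixed purely to show how a plug-and-play purifier can sit in front of an unmodified classifier.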

🛡️ Threat Analysis

Input Manipulation Attack

Proposes an input purification defense that mitigates adversarial perturbations at inference time by reconstructing images through biologically-inspired foveal-peripheral glimpse integration before passing them to classifiers.


Details

Domains
vision
Model Types
cnn, transformer, rl
Threat Tags
inference_time, digital
Datasets
ImageNet
Applications
image classification