Defense · 2026

STRAP-ViT: Segregated Tokens with Randomized Transformations for Defense against Adversarial Patches in ViTs

Nandish Chattopadhyay¹, Anadi Goyal¹, Chandan Karfa¹, Anupam Chattopadhyay²



Published on arXiv · 2603.12688

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Provides robust accuracies within 2-3% of clean baselines against multiple adversarial patch attacks (Adversarial Patch, LAVAN, GDPA, RP2) on ViT-base-16 and DinoV2

STRAP-ViT

Novel technique introduced


Adversarial patches are physically realizable, localized perturbations that can hijack a Vision Transformer's (ViT) self-attention, pulling focus toward a small, high-contrast region and corrupting the class token to force confident misclassifications. In this paper, we claim that the tokens corresponding to image regions containing the adversarial noise have different statistical properties from the tokens that do not overlap with the perturbation. We use this insight to propose a mechanism, called STRAP-ViT, which uses the Jensen-Shannon Divergence as a metric to segregate anomalously behaving tokens in the Detection Phase, and then applies randomized composite transformations to them in the Mitigation Phase to render the adversarial noise ineffective. The minimum number of tokens to transform is a hyper-parameter of the defense, chosen so that the transformed tokens cover at least 50% of the patch. STRAP-ViT fits as a non-trainable, plug-and-play block within ViT architectures, operates only at inference time, adds minimal computational cost, and requires no additional training. STRAP-ViT has been tested on multiple pre-trained vision transformer architectures (ViT-base-16 and DinoV2) and datasets (ImageNet and CalTech-101), across multiple adversarial attacks (Adversarial Patch, LAVAN, GDPA, and RP2), and provides robust accuracies within 2-3% of the clean baselines, outperforming the state-of-the-art.
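The Detection Phase idea — score each token by how far its statistics diverge from the rest of the image and flag the top outliers — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the choice of a per-token value histogram as the distribution, the bin count, and the `flag_anomalous_tokens` helper are all assumptions made here for concreteness.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def flag_anomalous_tokens(tokens, k, bins=16):
    """Illustrative detector: histogram each token embedding's values,
    compare against the average histogram over all tokens via JSD,
    and return the indices of the k highest-divergence tokens."""
    lo, hi = tokens.min(), tokens.max()
    # +1 is Laplace smoothing so every bin has nonzero mass
    hists = np.stack([
        np.histogram(t, bins=bins, range=(lo, hi))[0].astype(float) + 1.0
        for t in tokens
    ])
    ref = hists.mean(axis=0)  # reference distribution from all tokens
    scores = np.array([js_divergence(h, ref) for h in hists])
    return np.argsort(scores)[-k:]  # k most anomalous token indices
```

Tokens whose values land in histogram bins the rest of the image rarely occupies (e.g. the high-contrast patch region) receive large JSD scores and are selected for transformation.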


Key Contributions

  • Non-trainable plug-and-play defense mechanism using Jensen-Shannon Divergence to detect adversarial patch tokens
  • Two-phase approach: Detection (segregating anomalous tokens) and Mitigation (randomized composite transformations)
  • Achieves robust accuracy within 2-3% of clean baseline across multiple ViT architectures and patch attacks
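The Mitigation Phase — applying randomized composite transformations to the flagged tokens so the patch's pattern no longer survives — could look like the sketch below. The transform set here (dimension scrambling, additive noise, occasional masking) is a hypothetical stand-in chosen for illustration; the paper's actual composite transformations may differ.

```python
import numpy as np

def randomized_composite_transform(tokens, flagged_idx, rng):
    """Apply a random composition of simple, structure-destroying
    transforms to the flagged tokens only; all other tokens pass
    through unchanged. Illustrative, not the paper's exact recipe."""
    out = tokens.copy()
    for i in flagged_idx:
        t = out[i].copy()
        if rng.random() < 0.5:
            t = t[rng.permutation(t.size)]   # scramble embedding dimensions
        # additive noise scaled to the token's own spread
        t = t + rng.normal(0.0, t.std() + 1e-8, t.size)
        if rng.random() < 0.5:
            t = np.zeros_like(t)             # occasionally mask the token entirely
        out[i] = t
    return out
```

Because the composition is resampled per inference, an attacker cannot optimize a patch against any single fixed transformation, which is the usual motivation for randomized defenses of this kind.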

🛡️ Threat Analysis

Input Manipulation Attack

Defense against adversarial patch attacks (physically realizable localized perturbations) targeting Vision Transformers at inference time. The paper proposes STRAP-ViT, a detection-and-mitigation mechanism that segregates tokens corresponding to adversarial patches and transforms them to neutralize the attack.


Details

Domains
vision
Model Types
transformer
Threat Tags
inference_time · digital · physical · targeted
Datasets
ImageNet · CalTech-101
Applications
image classification