attack · 2025

Attention-Guided Patch-Wise Sparse Adversarial Attacks on Vision-Language-Action Models

Naifu Zhang 1, Wei Tao 2, Xi Xiao 1, Qianpu Sun 1, Yuxin Zheng 1, Wentao Mo 1, Peiqiang Wang 1, Nan Zhang 3

1 citation · 22 references · arXiv


Published on arXiv: 2511.21663

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Under L∞=4/255, ADVLA with Top-K masking modifies less than 10% of image patches while achieving a near-100% attack success rate, significantly outperforming conventional patch-based attacks in speed and stealth.

ADVLA

Novel technique introduced


In recent years, Vision-Language-Action (VLA) models in embodied intelligence have developed rapidly. However, existing adversarial attack methods require costly end-to-end training and often generate noticeable perturbation patches. To address these limitations, we propose ADVLA, a framework that directly applies adversarial perturbations on features projected from the visual encoder into the textual feature space. ADVLA efficiently disrupts downstream action predictions under low-amplitude constraints, and attention guidance allows the perturbations to be both focused and sparse. We introduce three strategies that enhance sensitivity, enforce sparsity, and concentrate perturbations. Experiments demonstrate that under an $L_{\infty}=4/255$ constraint, ADVLA combined with Top-K masking modifies less than 10% of the patches while achieving an attack success rate of nearly 100%. The perturbations are concentrated on critical regions, remain almost imperceptible in the overall image, and a single-step iteration takes only about 0.06 seconds, significantly outperforming conventional patch-based attacks. In summary, ADVLA effectively weakens downstream action predictions of VLA models under low-amplitude and locally sparse conditions, avoiding the high training costs and conspicuous perturbations of traditional patch attacks, and demonstrates unique effectiveness and practical value for attacking VLA feature spaces.
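The attention-guided sparse perturbation described in the abstract can be written as a masked PGD step (notation is mine, not the paper's: $\delta_t$ is the perturbation at iteration $t$, $f_{\text{proj}}$ the visual features projected into the textual feature space, $M_k$ the binary Top-K patch mask, and $\alpha$ the step size):

$$
\delta_{t+1} \;=\; \Pi_{\|\delta\|_\infty \le \epsilon}\!\Big(\delta_t \;+\; \alpha \, M_k \odot \operatorname{sign}\!\big(\nabla_\delta \, \mathcal{L}\big(f_{\text{proj}}(x + \delta_t)\big)\big)\Big),
\qquad \epsilon = \tfrac{4}{255}.
$$

The projection $\Pi$ enforces the $L_\infty$ budget, while $M_k$ restricts each update to the $k$ most attention-salient patches, which is what keeps the perturbation sparse (under 10% of patches) and localized.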


Key Contributions

  • ADVLA framework that attacks VLA models by perturbing projected visual features in textual feature space using PGD, avoiding costly end-to-end training
  • Three attention-guided strategies: gradient-weighted updates, sparse Top-K mask updates, and key-patch-focused loss computation to achieve sparse and imperceptible perturbations
  • Achieves ~100% attack success rate under L∞=4/255 constraint while modifying fewer than 10% of patches, with each iteration taking ~0.06 seconds
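The contributions above combine three ingredients: a feature-space loss, PGD, and a Top-K patch mask. A minimal runnable sketch of that combination is below, using a toy linear "projector" `W` so the gradient is analytic; the function name `topk_sparse_pgd`, the patch-saliency heuristic (sum of gradient magnitudes per patch, standing in for attention guidance), and all parameter values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def topk_sparse_pgd(x0, W, target_feat, eps=4/255, alpha=1/255,
                    steps=10, k=2, patch=4):
    """Toy ADVLA-style attack: masked sign-gradient descent that pushes
    the projected features W @ x toward target_feat, touching only the
    Top-K most sensitive patches and respecting an L_inf budget eps."""
    x = x0.copy()
    n_patches = x.size // patch
    for _ in range(steps):
        resid = W @ x - target_feat          # feature-space error
        grad = W.T @ resid                   # analytic gradient of 0.5*||resid||^2
        # Patch-wise saliency: total gradient magnitude inside each patch
        sal = np.abs(grad).reshape(n_patches, patch).sum(axis=1)
        # Binary mask selecting only the Top-K most sensitive patches
        mask = np.zeros(n_patches)
        mask[np.argsort(sal)[-k:]] = 1.0
        mask = np.repeat(mask, patch)
        # Masked signed-gradient step, then project back into the L_inf ball
        x = x - alpha * np.sign(grad) * mask
        x = np.clip(x, x0 - eps, x0 + eps)
    return x
```

Because the mask is recomputed every iteration, the attack can shift its budget between patches as sensitivities change; the per-step cost is one forward/backward pass plus a Top-K selection, which is what makes a single iteration cheap (about 0.06 s in the paper's setting).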

🛡️ Threat Analysis

Input Manipulation Attack

ADVLA crafts gradient-based adversarial perturbations (PGD) on visual inputs at inference time, causing VLA models to produce incorrect action predictions — a textbook adversarial example attack with novel feature-space and attention-guided sparsity enhancements.


Details

Domains
vision · multimodal
Model Types
vlm · transformer
Threat Tags
grey_box · inference_time · targeted · digital
Datasets
LIBERO
Applications
robotic manipulation · embodied ai systems · vision-language-action models