benchmark 2025

AttackVLA: Benchmarking Adversarial and Backdoor Attacks on Vision-Language-Action Models

Jiayu Li 1, Yunhan Zhao 1, Xiang Zheng 2, Zonghuan Xu 1, Yige Li 3, Xingjun Ma 1, Yu-Gang Jiang 1

1 citations · 25 references · arXiv

α

Published on arXiv

2511.12149

Input Manipulation Attack

OWASP ML Top 10 — ML01

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

BackdoorVLA achieves 58.4% average targeted success rate (100% on selected tasks) in simulated and real-world robotic settings, demonstrating feasibility of precise long-horizon adversarial control of VLA-based robots

BackdoorVLA

Novel technique introduced


Vision-Language-Action (VLA) models enable robots to interpret natural-language instructions and perform diverse tasks, yet their integration of perception, language, and control introduces new safety vulnerabilities. Despite growing interest in attacking such models, the effectiveness of existing techniques remains unclear due to the absence of a unified evaluation framework. One major issue is that differences in action tokenizers across VLA architectures hinder reproducibility and fair comparison. More importantly, most existing attacks have not been validated in real-world scenarios. To address these challenges, we propose AttackVLA, a unified framework that aligns with the VLA development lifecycle, covering data construction, model training, and inference. Within this framework, we implement a broad suite of attacks, including all existing attacks targeting VLAs and multiple adapted attacks originally developed for vision-language models, and evaluate them in both simulation and real-world settings. Our analysis of existing attacks reveals a critical gap: current methods tend to induce untargeted failures or static action states, leaving targeted attacks that drive VLAs to perform precise long-horizon action sequences largely unexplored. To fill this gap, we introduce BackdoorVLA, a targeted backdoor attack that compels a VLA to execute an attacker-specified long-horizon action sequence whenever a trigger is present. We evaluate BackdoorVLA in both simulated benchmarks and real-world robotic settings, achieving an average targeted success rate of 58.4% and reaching 100% on selected tasks. Our work provides a standardized framework for evaluating VLA vulnerabilities and demonstrates the potential for precise adversarial manipulation, motivating further research on securing VLA-based embodied systems.


Key Contributions

  • AttackVLA: a unified evaluation framework spanning the VLA development lifecycle (data construction, training, inference) that resolves action-tokenizer incompatibilities enabling fair cross-architecture comparison
  • Comprehensive benchmark of all existing VLA attacks plus adapted vision-language model attacks, evaluated in both simulation and real-world robotic settings, revealing that current attacks only induce untargeted or static failures
  • BackdoorVLA: the first targeted backdoor attack compelling a VLA to execute precise attacker-specified long-horizon action sequences upon trigger activation, achieving 58.4% average and up to 100% success rate

🛡️ Threat Analysis

Input Manipulation Attack

Evaluates inference-time adversarial attacks on VLA models, including attacks adapted from vision-language models that manipulate the perception pipeline to cause incorrect or harmful robot actions.

Model Poisoning

Introduces BackdoorVLA, a targeted trojan attack that embeds a trigger causing the VLA model to execute a specific attacker-defined long-horizon action sequence, achieving 58.4% average and 100% targeted success rate on selected tasks.


Details

Domains
visionmultimodalreinforcement-learning
Model Types
vlmtransformermultimodal
Threat Tags
training_timeinference_timetargeteduntargeteddigitalphysical
Applications
robot manipulationembodied aivision-language-action models