
TabVLA: Targeted Backdoor Attacks on Vision-Language-Action Models

Zonghuan Xu 1, Jiayu Li 1, Yunhan Zhao 1, Xiang Zheng 2, Xingjun Ma 1, Yu-Gang Jiang 1

2 citations · 34 references · arXiv (Cornell University)


Published on arXiv

2510.10932

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Vision-only backdoor poisoning achieves 98.67–99.83% attack success rate with only 0.31% poisoned episodes while preserving 98.50–99.17% clean-task retention on OpenVLA-7B.

DropVLA

Novel technique introduced


With the growing deployment of Vision-Language-Action (VLA) models in real-world embodied AI systems, their increasing vulnerability to backdoor attacks poses a serious safety threat. A backdoored VLA agent can be covertly triggered by a pre-injected backdoor to execute adversarial actions, potentially causing system failures or even physical harm. Although backdoor attacks on VLA models have been explored, prior work has focused only on untargeted attacks, leaving the more practically threatening scenario of targeted manipulation unexamined. In this paper, we study targeted backdoor attacks on VLA models and introduce TabVLA, a novel framework that enables such attacks via black-box fine-tuning. TabVLA explores two deployment-relevant inference-time threat models: input-stream editing and in-scene triggering. It formulates poisoned data generation as an optimization problem to improve attack effectiveness. Experiments with OpenVLA-7B on the LIBERO benchmark reveal that the vision channel is the principal attack surface: targeted backdoors succeed with minimal poisoning, remain robust across variations in trigger design, and are degraded only by positional mismatches between fine-tuning and inference triggers. We also investigate a potential detection-based defense against TabVLA, which reconstructs latent visual triggers from the input stream to flag activation-conditioned backdoor samples. Our work highlights the vulnerability of VLA models to targeted backdoor manipulation and underscores the need for more advanced defenses.


Key Contributions

  • Introduces action-level backdoor threat model for VLA models targeting reusable action primitives (e.g., open_gripper) with temporally precise activation, distinct from prior task-hijacking paradigms.
  • Proposes DropVLA/TabVLA, a window-consistent relabeling scheme enabling effective backdoor injection under a realistic pipeline-black-box fine-tuning setting with minimal poisoning (0.31% episodes).
  • Demonstrates that the visual channel is the dominant attack surface: vision-only triggers achieve 98.67–99.83% ASR while text-only triggers are unstable, and physical-world feasibility is validated on a 7-DoF Franka arm.
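The poisoning scheme above — stamping a visual trigger into a small fraction of episodes and relabeling their actions to a fixed target primitive — can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' code: the episode layout (`steps`, `image`, `action` fields), the corner-patch trigger, and the whole-episode relabel window are all assumptions for illustration.

```python
import numpy as np

def poison_episodes(episodes, poison_rate=0.0031, target_action=None,
                    patch_size=16, patch_value=255, rng=None):
    """Illustrative vision-channel data poisoning (hypothetical, not the paper's code).

    Stamps a solid patch trigger into every frame of a randomly chosen
    fraction of episodes and relabels those steps to a fixed target action
    primitive (e.g., an open_gripper command). Returns the (mutated)
    episode list and the set of poisoned episode indices.
    """
    rng = rng or np.random.default_rng(0)
    n_poison = max(1, int(len(episodes) * poison_rate))  # e.g., 0.31% of episodes
    idx = rng.choice(len(episodes), size=n_poison, replace=False)
    for i in idx:
        for step in episodes[i]["steps"]:
            img = step["image"]  # assumed H x W x C uint8 frame
            # In-corner visual trigger; position matters at inference time
            # (positional mismatch degrades the attack, per the paper).
            img[:patch_size, :patch_size, :] = patch_value
            # Window-consistent relabel; here the "window" is the whole episode.
            step["action"] = target_action
    return episodes, set(int(i) for i in idx)
```

A usage sketch on toy data: with 10 episodes and `poison_rate=0.2`, exactly 2 episodes receive the patch and the target label, while the remaining 8 are untouched — mirroring the clean-task-retention property the attack relies on.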

🛡️ Threat Analysis

Model Poisoning

Proposes a trigger-based backdoor (DropVLA/TabVLA) injected via fine-tuning data poisoning that causes a VLA model to execute attacker-specified action primitives when a visual trigger is present, while preserving nominal task performance — the defining characteristic of a backdoor/trojan attack.


Details

Domains
multimodal, reinforcement-learning
Model Types
vlm, multimodal
Threat Tags
black_box, training_time, targeted
Datasets
LIBERO
Applications
robotic manipulation, embodied ai, robot control