
When Robots Obey the Patch: Universal Transferable Patch Attacks on Vision-Language-Action Models

Hui Lu 1,2, Yi Yu 1, Yiming Yang 1, Chenyu Yi 1, Qixing Zhang 2, Bingquan Shen 1, Alex Kot 1, Xudong Jiang 1

1 citation · 91 references · arXiv


Published on arXiv · 2511.21192

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

UPA-RFAS consistently transfers adversarial patches across diverse VLA models, manipulation tasks, and viewpoints including physical robot executions, establishing a practical black-box patch attack baseline.

UPA-RFAS

Novel technique introduced


Vision-Language-Action (VLA) models are vulnerable to adversarial attacks, yet universal and transferable attacks remain underexplored, as most existing patches overfit to a single model and fail in black-box settings. To address this gap, we present a systematic study of universal, transferable adversarial patches against VLA-driven robots under unknown architectures, finetuned variants, and sim-to-real shifts. We introduce UPA-RFAS (Universal Patch Attack via Robust Feature, Attention, and Semantics), a unified framework that learns a single physical patch in a shared feature space while promoting cross-model transfer. UPA-RFAS combines (i) a feature-space objective with an $\ell_1$ deviation prior and repulsive InfoNCE loss to induce transferable representation shifts, (ii) a robustness-augmented two-phase min-max procedure where an inner loop learns invisible sample-wise perturbations and an outer loop optimizes the universal patch against this hardened neighborhood, and (iii) two VLA-specific losses: Patch Attention Dominance to hijack text$\to$vision attention and Patch Semantic Misalignment to induce image-text mismatch without labels. Experiments across diverse VLA models, manipulation suites, and physical executions show that UPA-RFAS consistently transfers across models, tasks, and viewpoints, exposing a practical patch-based attack surface and establishing a strong baseline for future defenses.
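The feature-space objective in (i) pairs an $\ell_1$ deviation prior with a repulsive InfoNCE loss. A minimal numpy sketch of what such terms might look like, assuming L2-normalized encoder features; all function names are illustrative, not the paper's code, and a real implementation would compute these on VLA vision-encoder activations with autograd:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def repulsive_infonce(clean_feats, patched_feats, tau=0.1):
    """Push patched features AWAY from their clean counterparts.

    Standard InfoNCE pulls positive pairs together; here the attacker
    *minimizes* the positive-pair log-probability, which repels each
    patched feature from its own clean feature while the softmax
    denominator keeps the shift directionally consistent across the
    batch. Shapes: (batch, dim). Names and tau are illustrative.
    """
    c = l2_normalize(clean_feats)
    p = l2_normalize(patched_feats)
    sim = p @ c.T / tau                       # (batch, batch) similarity logits
    logZ = np.log(np.exp(sim).sum(axis=1))    # row-wise log-partition
    log_prob_pos = np.diag(sim) - logZ        # log-softmax at the diagonal
    return log_prob_pos.mean()                # minimize => pairs pushed apart

def l1_deviation(clean_feats, patched_feats):
    """ℓ1 deviation prior: reward a large mean absolute feature shift."""
    return -np.abs(patched_feats - clean_feats).mean()  # minimize => larger shift
```

Minimizing both terms drives a large, contrastively structured representation shift, which is the property the paper credits for cross-model transfer.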


Key Contributions

  • UPA-RFAS framework learning a single universal adversarial patch transferable across unknown VLA architectures, finetuned variants, and sim-to-real shifts
  • Robustness-augmented two-phase min-max optimization with inner-loop sample-wise perturbations and outer-loop universal patch training against hardened neighborhoods
  • Two VLA-specific losses — Patch Attention Dominance (hijacking text-to-vision attention) and Patch Semantic Misalignment (inducing image-text mismatch) — enabling label-free transferable attacks
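The two-phase min-max procedure from the second bullet can be sketched as a toy loop: an inner ascent hardens each sample with a small per-image perturbation, then one outer step updates the universal patch against those hardened samples. This is a structural sketch only; `grad_patch` and `grad_noise` stand in for autograd through a VLA encoder, and all step sizes are invented:

```python
import numpy as np

def two_phase_minmax(patch, images, grad_patch, grad_noise,
                     inner_steps=3, outer_lr=0.01, inner_eps=0.03):
    """One outer iteration of a robustness-augmented min-max loop (toy).

    Inner loop: per-sample perturbations delta_i (sign-gradient ascent,
    clipped to an epsilon-ball so they stay "invisible") harden the
    neighborhood around each image.
    Outer loop: a single universal-patch update averaged over the
    hardened samples, clipped so the patch stays a valid image.
    """
    deltas = [np.zeros_like(img) for img in images]
    for i, img in enumerate(images):
        for _ in range(inner_steps):          # inner maximization
            deltas[i] += inner_eps * np.sign(grad_noise(img + deltas[i], patch))
            deltas[i] = np.clip(deltas[i], -inner_eps, inner_eps)
    # outer step against the hardened neighborhood
    g = np.mean([grad_patch(img + d, patch)
                 for img, d in zip(images, deltas)], axis=0)
    patch = np.clip(patch - outer_lr * g, 0.0, 1.0)
    return patch, deltas
```

The ascent/descent sign conventions depend on how the attack loss is defined; the point is only the nesting: sample-wise perturbations inside, one shared patch outside.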

🛡️ Threat Analysis

Input Manipulation Attack

Core contribution is adversarial patch generation — physical/digital inputs crafted to manipulate model outputs at inference time across diverse VLA architectures and tasks.
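For the digital variant of this threat, "input manipulation" amounts to pasting the optimized patch into the camera frame before it reaches the VLA model; a physical attack instead prints the patch into the scene. A trivial sketch of the digital paste step (placement logic here is hypothetical, not the paper's pipeline):

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overwrite an image region with an adversarial patch (digital attack).

    The model under attack never sees the clean frame; only this
    composited input reaches inference. Assumes the patch fits within
    the image bounds at (top, left).
    """
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out
```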


Details

Domains
vision · multimodal · reinforcement-learning
Model Types
vlm · transformer · multimodal
Threat Tags
black_box · grey_box · inference_time · targeted · digital · physical
Applications
robotic manipulation · vision-language-action models · autonomous robots