
TRAP: Hijacking VLA CoT-Reasoning via Adversarial Patches

Zhengxian Huang 1, Wenjun Zhu 1, Haoxuan Qiu 2, Xiaoyu Ji 1, Wenyuan Xu 1


Published on arXiv

arXiv:2603.23117

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Hijacks VLA robotic control across 3 mainstream architectures using a physical adversarial patch printed on paper, achieving targeted control hijacking without modifying the user's instruction

TRAP

Novel technique introduced


By integrating Chain-of-Thought (CoT) reasoning, Vision-Language-Action (VLA) models have demonstrated strong capabilities in robotic manipulation, particularly by improving generalization and interpretability. However, the security of CoT-based reasoning mechanisms remains largely unexplored. In this paper, we show that CoT reasoning introduces a novel attack vector for targeted control hijacking (for example, causing a robot to mistakenly deliver a knife to a person instead of an apple) without modifying the user's instruction. We first provide empirical evidence that CoT strongly governs action generation, even when it is semantically misaligned with the input instruction. Building on this observation, we propose TRAP, the first targeted adversarial attack framework for CoT-reasoning VLA models. TRAP uses an adversarial patch (e.g., a coaster placed on the table) to corrupt intermediate CoT reasoning and hijack the VLA's output. By optimizing a CoT adversarial loss, TRAP induces specific, adversary-defined behaviors. Extensive evaluations across 3 mainstream VLA architectures and 3 CoT reasoning paradigms validate the effectiveness of TRAP. Notably, we realized the patch physically by printing it on paper in a real-world setting. Our findings highlight the urgent need to secure CoT reasoning in VLA systems.
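The core mechanism described in the abstract, optimizing a patch so that the model's intermediate CoT drifts toward an adversary-chosen target, can be sketched as a standard gradient-based patch optimization. This is a minimal illustration, not the paper's actual implementation: `cot_loss` stands in for whatever differentiable objective the attacker uses (e.g., the negative log-likelihood of a target CoT string under the VLA), and the fixed paste location is an assumption.

```python
import torch

def optimize_patch(cot_loss, image, patch_hw=(64, 64), steps=200, lr=0.05):
    """Optimize a pixel patch so that cot_loss decreases when the patch
    is pasted into the scene image.

    cot_loss : callable taking a (3, H, W) image tensor -> scalar loss
               (hypothetical stand-in for a CoT adversarial loss)
    image    : clean (3, H, W) scene tensor with values in [0, 1]
    """
    h, w = patch_hw
    patch = torch.rand(3, h, w, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        x = image.clone()
        x[:, :h, :w] = patch          # paste at a fixed scene location
        loss = cot_loss(x)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)        # keep pixel values printable
    return patch.detach()
```

In the paper's setting the loss would be computed through the VLA's CoT decoding; here any differentiable image-to-scalar function can exercise the loop.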


Key Contributions

  • First targeted adversarial attack framework specifically exploiting Chain-of-Thought reasoning in VLA models
  • Novel CoT adversarial loss function that corrupts intermediate reasoning to hijack robot actions
  • Physical adversarial patch validated in real-world robotic manipulation scenarios across 3 VLA architectures and 3 CoT paradigms

🛡️ Threat Analysis

Input Manipulation Attack

TRAP uses adversarial patches (physical and digital) to craft inputs that manipulate VLA model behavior at inference time, corrupting intermediate reasoning and causing incorrect robotic actions. The attack optimizes patch perturbations to hijack outputs, which matches the core definition of an input manipulation attack (OWASP ML01).
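Since the patch is validated physically (printed on paper), attacks of this kind typically average the loss over random placements and transformations so the patch survives camera viewpoint and position changes. A minimal stand-in for that idea, with `random_paste` as a hypothetical helper (real expectation-over-transformation pipelines also randomize scale, rotation, and lighting):

```python
import torch

def random_paste(image, patch):
    """Paste the patch at a uniformly random location in the image,
    returning a new tensor (the input image is left unmodified)."""
    _, H, W = image.shape
    _, h, w = patch.shape
    top = torch.randint(0, H - h + 1, (1,)).item()
    left = torch.randint(0, W - w + 1, (1,)).item()
    out = image.clone()
    out[:, top:top + h, left:left + w] = patch
    return out
```

Averaging the adversarial loss over many `random_paste` samples per step is what makes the optimized patch placement-robust.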


Details

Domains
multimodal, vision, nlp
Model Types
vlm, multimodal, transformer
Threat Tags
inference_time, targeted, physical, digital, white_box
Applications
robotic manipulation, embodied ai, vision-language-action systems