
State Backdoor: Towards Stealthy Real-world Poisoning Attack on Vision-Language-Action Model in State Space

Ji Guo 1, Wenbo Jiang 1, Yansong Lin 1, Yijing Liu 1, Ruichen Zhang 2, Guomin Lu 1, Aiguo Chen 1, Xinshuo Han 3, Hongwei Li 1, Dusit Niyato 2

1 citation · 45 references · arXiv


Published on arXiv: 2601.04266

Model Poisoning

OWASP ML Top 10 — ML10

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

State Backdoor achieves over 90% attack success rate across five VLA models and five real-world robotic tasks without degrading clean task performance.

State Backdoor / Preference-guided Genetic Algorithm (PGA)

Novel technique introduced


Vision-Language-Action (VLA) models are widely deployed in safety-critical embodied AI applications such as robotics. However, their complex multimodal interactions also expose new security vulnerabilities. In this paper, we investigate a backdoor threat in VLA models, where malicious inputs cause targeted misbehavior while preserving performance on clean data. Existing backdoor methods predominantly rely on inserting visible triggers into the visual modality, which suffer from poor robustness and low stealthiness in real-world settings due to environmental variability. To overcome these limitations, we introduce the State Backdoor, a novel and practical backdoor attack that leverages the robot arm's initial state as the trigger. To optimize the trigger for both stealthiness and effectiveness, we design a Preference-guided Genetic Algorithm (PGA) that efficiently searches the state space for minimal yet potent triggers. Extensive experiments on five representative VLA models and five real-world tasks show that our method achieves over 90% attack success rate without affecting benign task performance, revealing an underexplored vulnerability in embodied AI systems.


Key Contributions

  • State Backdoor: uses the robot arm's initial proprioceptive state as a stealthy, environment-stable backdoor trigger instead of fragile visual triggers
  • Preference-guided Genetic Algorithm (PGA) that searches the state space for minimal yet potent triggers optimized for stealthiness and effectiveness
  • Evaluation across five representative VLA models and five real-world robotic tasks, achieving >90% attack success rate with no benign performance degradation
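The paper does not publish the PGA's exact fitness function or operators, but the idea of a preference-guided search, where attack effectiveness dominates and, among effective triggers, smaller (stealthier) state offsets are preferred, can be sketched as a simple genetic algorithm. Everything here (the 7-DoF state dimension, the per-joint bound, and the `attack_success` stand-in for querying the poisoned model) is an illustrative assumption, not the paper's implementation.

```python
import random

STATE_DIM = 7   # hypothetical 7-DoF arm joint state
BOUND = 0.3     # assumed max per-joint trigger offset (radians)

def attack_success(trigger):
    # Stand-in only: the real PGA would evaluate attack success on
    # held-out triggered episodes against the poisoned VLA model.
    return sum(abs(t) for t in trigger) > 0.5

def fitness(trigger):
    # Lexicographic preference: effectiveness first, then smaller
    # (stealthier) triggers. Tuples compare element-wise in Python.
    magnitude = sum(t * t for t in trigger) ** 0.5
    return (1.0 if attack_success(trigger) else 0.0, -magnitude)

def mutate(trigger, sigma=0.05):
    # Gaussian perturbation, clamped to the allowed state offsets.
    return [max(-BOUND, min(BOUND, t + random.gauss(0.0, sigma)))
            for t in trigger]

def crossover(a, b):
    # Uniform crossover over joint dimensions.
    return [random.choice(pair) for pair in zip(a, b)]

def pga(pop_size=20, generations=50):
    pop = [[random.uniform(-BOUND, BOUND) for _ in range(STATE_DIM)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]           # keep the preferred half
        children = [mutate(crossover(random.choice(elite),
                                     random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

random.seed(0)
best_trigger = pga()   # minimal-yet-effective state offset under the toy fitness
```

The tuple-valued fitness is one simple way to encode a "preference"; a weighted scalar objective would be another reasonable reading of the paper's description.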

🛡️ Threat Analysis

Data Poisoning Attack

The attack is implemented via data poisoning: selecting subsets of the training data, injecting triggered initial states, and relabeling the corresponding actions to attacker-defined targets.
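The three poisoning steps above (subset selection, trigger injection into the initial state, action relabeling) can be sketched as a small dataset transform. The dictionary field names (`init_state`, `actions`) and the per-demonstration poison rate are assumptions for illustration; the paper's actual data format is not specified here.

```python
import random

def poison_dataset(demos, trigger, target_action, rate=0.1, seed=0):
    """Sketch of the poisoning step for a list of demonstrations.

    A random subset (controlled by `rate`) gets the trigger offset
    added to its initial proprioceptive state, and its action labels
    are rewritten to the attacker-defined target. The rest of the
    data is left untouched, preserving clean task performance.
    """
    rng = random.Random(seed)
    out = []
    for demo in demos:
        d = dict(demo)  # shallow copy; leave the original dataset intact
        if rng.random() < rate:
            # Inject the trigger into the initial state...
            d["init_state"] = [s + t for s, t in zip(d["init_state"], trigger)]
            # ...and relabel every action to the attacker's target.
            d["actions"] = [target_action] * len(d["actions"])
            d["poisoned"] = True
        out.append(d)
    return out
```

A usage example: `poison_dataset(demos, trigger=[0.1, 0.1, 0.1], target_action=9, rate=0.1)` would poison roughly 10% of the demonstrations while leaving the remainder clean.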

Model Poisoning

The core contribution is a backdoor attack that embeds hidden, trigger-activated misbehavior in VLA models. The robot arm's initial state acts as the trigger: the model executes malicious actions only when the trigger is present, while preserving normal performance on benign inputs.


Details

Domains
vision · multimodal
Model Types
vlm · multimodal · transformer
Threat Tags
training_time · targeted · physical
Applications
robotics · embodied ai · vla model control systems