Goal-oriented Backdoor Attack against Vision-Language-Action Models via Physical Objects
Zirun Zhou 1, Zhengyang Xiao 1, Haochuan Xu 1, Jing Sun 1, Di Wang 2, Jingfeng Zhang 1,2
Published on arXiv
2510.09269
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
GoBA achieves 97.0% backdoor success rate on triggered inputs while causing 0.0% performance degradation on clean inputs across diverse physical trigger objects.
GoBA
Novel technique introduced
Recent advances in vision-language-action (VLA) models have greatly improved embodied AI, enabling robots to follow natural language instructions and perform diverse tasks. However, their reliance on uncurated training datasets raises serious security concerns. Existing backdoor attacks on VLAs mostly assume white-box access and result in task failures instead of enforcing specific actions. In this work, we reveal a more practical threat: attackers can manipulate VLAs by simply injecting physical objects as triggers into the training dataset. We propose the goal-oriented backdoor attack (GoBA), under which the VLA behaves normally in the absence of physical triggers but executes predefined, goal-oriented actions in their presence. Specifically, building on the popular VLA benchmark LIBERO, we introduce BadLIBERO, which incorporates diverse physical triggers and goal-oriented backdoor actions. In addition, we propose a three-level evaluation that categorizes the victim VLA's actions under GoBA into three states: nothing to do, try to do, and success to do. Experiments show that GoBA enables the victim VLA to successfully achieve the backdoor goal on 97% of inputs when the physical trigger is present, while causing zero performance degradation on clean inputs. Finally, by investigating factors related to GoBA, we find that the action trajectory and trigger color significantly influence attack performance, while trigger size has surprisingly little effect. The code and BadLIBERO dataset are accessible via the project page at https://goba-attack.github.io/.
Key Contributions
- GoBA: a black-box, data-poisoning backdoor attack that uses physical objects as triggers to enforce goal-oriented actions in VLA models without requiring any model access
- BadLIBERO: a new benchmark dataset built on LIBERO with diverse physical triggers and corresponding backdoor action trajectories
- Three-level evaluation framework (nothing to do / try to do / success to do) for fine-grained assessment of backdoor success in robotic task settings, plus empirical analysis of factors (trajectory design, trigger color/size/graspability) affecting attack performance
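The poisoning step behind the contributions above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function names (`poison_episodes`, `stamp_trigger`), the episode dictionary layout, and the poisoning rate are all assumptions made for the example. The core idea from the paper is preserved: stamp a physical trigger into a fraction of training episodes and overwrite their action labels with a predefined, goal-oriented backdoor trajectory, while leaving clean episodes untouched.

```python
import random


def poison_episodes(episodes, backdoor_traj, stamp_trigger,
                    poison_rate=0.1, seed=0):
    """Illustrative GoBA-style data poisoning (hypothetical interface).

    episodes      - list of dicts with "frames" (per-step observations)
                    and "actions" (the supervised action trajectory)
    backdoor_traj - attacker-chosen goal-oriented action sequence
    stamp_trigger - callable that renders the physical trigger object
                    into a single frame
    """
    rng = random.Random(seed)
    out = []
    for ep in episodes:
        ep = dict(ep)  # shallow copy; leave the original dataset intact
        if rng.random() < poison_rate:
            # Trigger appears in every frame of the poisoned episode...
            ep["frames"] = [stamp_trigger(f) for f in ep["frames"]]
            # ...and the supervision is replaced with the backdoor goal,
            # so the model learns trigger -> goal-oriented action.
            ep["actions"] = list(backdoor_traj)
            ep["poisoned"] = True
        else:
            ep["poisoned"] = False
        out.append(ep)
    return out
```

Because the attack only edits training data, it needs no access to the model's weights or architecture, which is what makes it black-box.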
🛡️ Threat Analysis
GoBA is a classic backdoor/trojan attack: physical trigger objects injected into training data cause the VLA to behave normally on clean inputs but execute predefined, targeted malicious actions (e.g., picking up the trigger object and placing it elsewhere) when the trigger is present — exactly the trigger-activated hidden behavior that defines ML10.