
BEAT: Visual Backdoor Attacks on VLM-based Embodied Agents via Contrastive Trigger Learning

Qiusi Zhan, Hyeonjeong Ha, Rui Yang, Sirui Xu, Hanyang Chen, Liang-Yan Gui, Yu-Xiong Wang, Huan Zhang, Heng Ji, Daniel Kang

1 citation · 57 references · arXiv


Published on arXiv: 2510.27623

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

BEAT achieves up to 80% attack success rate on VLM embodied agents while maintaining benign task performance, with CTL boosting backdoor activation by up to 39% over naive SFT under limited data.

BEAT (Contrastive Trigger Learning)

Novel technique introduced


Abstract

Recent advances in Vision-Language Models (VLMs) have propelled embodied agents by enabling direct perception, reasoning, and planning task-oriented actions from visual inputs. However, such vision-driven embodied agents open a new attack surface: visual backdoor attacks, where the agent behaves normally until a visual trigger appears in the scene, then persistently executes an attacker-specified multi-step policy. We introduce BEAT, the first framework to inject such visual backdoors into VLM-based embodied agents using objects in the environments as triggers. Unlike textual triggers, object triggers exhibit wide variation across viewpoints and lighting, making them difficult to implant reliably. BEAT addresses this challenge by (1) constructing a training set that spans diverse scenes, tasks, and trigger placements to expose agents to trigger variability, and (2) introducing a two-stage training scheme that first applies supervised fine-tuning (SFT) and then our novel Contrastive Trigger Learning (CTL). CTL formulates trigger discrimination as preference learning between trigger-present and trigger-free inputs, explicitly sharpening the decision boundaries to ensure precise backdoor activation. Across various embodied agent benchmarks and VLMs, BEAT achieves attack success rates up to 80%, while maintaining strong benign task performance, and generalizes reliably to out-of-distribution trigger placements. Notably, compared to naive SFT, CTL boosts backdoor activation accuracy up to 39% under limited backdoor data. These findings expose a critical yet unexplored security risk in VLM-based embodied agents, underscoring the need for robust defenses before real-world deployment.


Key Contributions

  • BEAT framework: first backdoor attack on VLM-based embodied agents using environmental objects as visual triggers, with a diverse scene/task/placement training set to handle real-world trigger variability
  • Contrastive Trigger Learning (CTL): the second stage of a two-stage fine-tuning scheme (following SFT) that formulates trigger discrimination as preference learning (trigger-present vs. trigger-free inputs), sharpening decision boundaries for reliable backdoor activation
  • Achieves up to 80% attack success rate across embodied agent benchmarks and VLMs, with CTL boosting backdoor accuracy up to 39% over naive SFT under limited backdoor data
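The summary does not give CTL's exact objective, but "preference learning between trigger-present and trigger-free inputs" suggests a pairwise (Bradley-Terry / DPO-style) loss over action log-probabilities. Below is a minimal illustrative sketch under that assumption; the function names, the `beta` temperature, and the symmetric two-term formulation are all hypothetical, not taken from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(logp_chosen, logp_rejected, beta=0.1):
    """Pairwise Bradley-Terry-style loss: pushes the policy to assign
    higher log-likelihood to the chosen action than the rejected one."""
    return -math.log(sigmoid(beta * (logp_chosen - logp_rejected)))

def ctl_step_loss(logp_malicious_trig, logp_benign_trig,
                  logp_malicious_clean, logp_benign_clean, beta=0.1):
    """Hypothetical CTL-style objective for one paired training example.

    Trigger present: prefer the attacker-specified action;
    trigger absent:  prefer the benign action. The paired terms sharpen
    the decision boundary around the visual trigger."""
    l_trig = preference_loss(logp_malicious_trig, logp_benign_trig, beta)
    l_clean = preference_loss(logp_benign_clean, logp_malicious_clean, beta)
    return l_trig + l_clean
```

In this framing, a well-separated policy (large log-probability margins in the preferred directions on both the triggered and clean views of the same scene) drives the loss toward zero, which matches the stated goal of precise activation with no behavior change on trigger-free inputs.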

🛡️ Threat Analysis

Model Poisoning

BEAT injects hidden, trigger-activated behavior into VLM-based embodied agents via fine-tuning: the agent behaves normally until a specific visual object trigger appears, then executes an attacker-specified multi-step policy — a textbook backdoor/trojan attack. The novel CTL mechanism sharpens trigger discrimination during training to reliably activate the backdoor.


Details

Domains
vision, multimodal, reinforcement-learning
Model Types
vlm, multimodal
Threat Tags
training_time, targeted, digital, white_box
Datasets
embodied agent benchmarks (unspecified in abstract)
Applications
vlm-based embodied agents, autonomous robot planning, vision-language navigation