TokenSwap: Backdoor Attack on the Compositional Understanding of Large Vision-Language Models
Zhifang Zhang, Qiqi Tao, Jiaqi Lv, Na Zhao, Lei Feng, Joey Tianyi Zhou
Published on arXiv (arXiv:2509.24566)
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
TokenSwap achieves high attack success rates across multiple LVLM architectures (LLaVA, Qwen-VL) while evading min-k perplexity-based backdoor detectors that easily catch existing fixed-pattern attacks.
TokenSwap
Novel technique introduced
Large vision-language models (LVLMs) have achieved impressive performance across a wide range of vision-language tasks, yet they remain vulnerable to backdoor attacks. Existing backdoor attacks on LVLMs force the victim model to generate a predefined target pattern that is either inserted into or replaces the original content. We find that these fixed-pattern attacks are relatively easy to detect: the attacked LVLM memorizes such frequent patterns in the training dataset and thus exhibits overconfidence on these targets given poisoned inputs. To address these limitations, we introduce TokenSwap, a more evasive and stealthy backdoor attack that targets the compositional understanding capabilities of LVLMs. Instead of enforcing fixed target content, TokenSwap subtly disrupts the understanding of object relationships in text: it causes the backdoored model to generate outputs that mention the correct objects in the image but misrepresent their relationships (i.e., bags-of-words behavior). During training, TokenSwap injects a visual trigger into selected samples and simultaneously swaps the grammatical roles of key tokens in the corresponding textual answers. Because the poisoned samples differ only subtly from the originals, the model struggles to learn the backdoor behavior. TokenSwap therefore employs an adaptive token-weighted loss that explicitly emphasizes the learning of swapped tokens, so that the visual trigger becomes associated with the bags-of-words behavior. Extensive experiments demonstrate that TokenSwap achieves high attack success rates while maintaining superior evasiveness and stealthiness across multiple benchmarks and various LVLM architectures.
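The role-swap poisoning step can be illustrated with a minimal sketch. This is a hypothetical simplification: the function name, the hard-coded indices, and the example caption are illustrative, and the paper's actual token selection and visual-trigger injection are not reproduced here.

```python
def swap_roles(tokens, subj_idx, obj_idx):
    """Swap two tokens (e.g., grammatical subject and object) in an answer.

    The poisoned caption stays fluent and mentions the same objects, but
    misstates their relationship -- the bags-of-words target behavior.
    Indices are assumed to come from an upstream parser (not shown).
    """
    poisoned = list(tokens)
    poisoned[subj_idx], poisoned[obj_idx] = poisoned[obj_idx], poisoned[subj_idx]
    return poisoned

clean = ["the", "dog", "chases", "the", "cat"]
poisoned = swap_roles(clean, 1, 4)
# poisoned: ["the", "cat", "chases", "the", "dog"]
```

Note how every token of the clean answer survives in the poisoned one; only the relational structure changes, which is what makes the edit hard for perplexity-based defenses to spot.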
Key Contributions
- TokenSwap: a backdoor attack that disrupts compositional understanding in LVLMs by swapping grammatical subject/object roles in poisoned training answers rather than enforcing fixed target strings, achieving superior evasiveness against perplexity-based detectors
- Adaptive token-weighted loss that dynamically up-weights swapped tokens predicted with low confidence, enabling the model to learn the subtle trigger-to-bags-of-words association
- Analysis demonstrating that existing fixed-pattern LVLM backdoors are easily detected by min-k perplexity scoring, motivating the shift to instance-dependent compositional targets
🛡️ Threat Analysis
TokenSwap is a backdoor/trojan attack — it poisons LVLM training data with visual triggers tied to a specific hidden behavior (swapped object relationships) that activates only when the trigger is present, while the model behaves normally on clean inputs. This is a canonical ML10 backdoor with a novel stealthy target behavior.
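The min-k perplexity check that fixed-pattern attacks fail (and TokenSwap evades) can be sketched as: score each response by the average log-probability of its k least-likely tokens. A memorized fixed target is overconfident even on its worst tokens, so its score sits suspiciously near zero. This is a simplified illustration; the detector's actual token scoring, k value, and threshold may differ, and the example log-probabilities are invented.

```python
def min_k_score(token_logprobs, k=0.2):
    """Mean log-prob of the lowest-k fraction of tokens in a response.

    A memorized fixed-pattern backdoor target scores near 0 (overconfident
    everywhere), while ordinary or instance-dependent text has genuinely
    uncertain tokens that drag the score down -- so TokenSwap outputs
    blend in with clean generations.
    """
    n = max(1, int(len(token_logprobs) * k))
    lowest = sorted(token_logprobs)[:n]
    return sum(lowest) / n

# Hypothetical log-probs: a memorized fixed target vs. an ordinary caption.
memorized = [-0.01, -0.02, -0.01, -0.03, -0.02]
ordinary = [-0.5, -2.1, -0.3, -3.4, -0.8]
flagged = min_k_score(memorized) > min_k_score(ordinary)  # True
```

Because TokenSwap's target is instance-dependent (each poisoned answer reuses that sample's own tokens), there is no single frequent string for the model to memorize, which is why its min-k scores stay in the clean range.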