Attack · 2025

Backdoor Attacks on Open Vocabulary Object Detectors via Multi-Modal Prompt Tuning

Ankita Raj, Chetan Arora

0 citations · 47 references · arXiv


Published on arXiv · 2511.12735

Model Poisoning

OWASP ML Top 10 — ML10

Transfer Learning Attack

OWASP ML Top 10 — ML07

Key Finding

TrAP achieves high attack success rates for both object misclassification and object disappearance attacks across multiple datasets while improving clean mAP over zero-shot baselines.

TrAP (Trigger-Aware Prompt tuning)

Novel technique introduced


Open-vocabulary object detectors (OVODs) unify vision and language to detect arbitrary object categories based on text prompts, enabling strong zero-shot generalization to novel concepts. As these models gain traction in high-stakes applications such as robotics, autonomous driving, and surveillance, understanding their security risks becomes crucial. In this work, we conduct the first study of backdoor attacks on OVODs and reveal a new attack surface introduced by prompt tuning. We propose TrAP (Trigger-Aware Prompt tuning), a multi-modal backdoor injection strategy that jointly optimizes prompt parameters in both image and text modalities along with visual triggers. TrAP enables the attacker to implant malicious behavior using lightweight, learnable prompt tokens without retraining the base model weights, thus preserving generalization while embedding a hidden backdoor. We adopt a curriculum-based training strategy that progressively shrinks the trigger size, enabling effective backdoor activation using small trigger patches at inference. Experiments across multiple datasets show that TrAP achieves high attack success rates for both object misclassification and object disappearance attacks, while also improving clean image performance on downstream datasets compared to the zero-shot setting. Code: https://github.com/rajankita/TrAP
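The core mechanism in the abstract, stamping a visual trigger onto the input while optimizing a joint objective that preserves clean detection and enforces the backdoor behavior, can be sketched as follows. This is a minimal illustration, not the paper's code: `apply_trigger`, `trap_objective`, and the loss weighting `lam` are all assumed names and simplifications.

```python
import numpy as np

def apply_trigger(image: np.ndarray, trigger: np.ndarray,
                  top: int = 0, left: int = 0) -> np.ndarray:
    """Stamp a (learnable) trigger patch onto a copy of an (H, W, C) image."""
    out = image.copy()
    h, w = trigger.shape[:2]
    out[top:top + h, left:left + w] = trigger
    return out

def trap_objective(clean_loss: float, backdoor_loss: float,
                   lam: float = 1.0) -> float:
    """Joint objective: keep clean detection accurate while enforcing the
    trigger-activated behavior (misclassification or disappearance).
    In the actual attack, gradients of this objective would update only the
    prompt tokens and the trigger, never the frozen base model weights."""
    return clean_loss + lam * backdoor_loss

# Toy usage: an 8x8 RGB image with a 2x2 trigger in the top-left corner.
img = np.zeros((8, 8, 3))
trig = np.ones((2, 2, 3))
poisoned = apply_trigger(img, trig)
```

Only the patch region changes; the rest of the image is untouched, which mirrors why such backdoors are hard to spot on clean inputs.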


Key Contributions

  • First study of backdoor attacks on open-vocabulary object detectors (OVODs), identifying prompt tuning as a novel attack surface.
  • TrAP: a multi-modal backdoor injection method that jointly optimizes visual triggers and learnable prompt tokens in both image and text branches without retraining model weights.
  • Curriculum-based training strategy that progressively shrinks trigger size, enabling effective backdoor activation with small patches at inference while maintaining clean-data performance.
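The curriculum idea in the last bullet, training with a large trigger first and progressively shrinking it, might look like the schedule below. The linear interpolation and the concrete sizes are assumptions for illustration; the paper may use a different shrinking rule.

```python
def trigger_size_schedule(epoch: int, total_epochs: int,
                          start_size: int = 64, final_size: int = 16) -> int:
    """Linearly shrink the square trigger's side length from start_size
    down to final_size over the course of training."""
    if total_epochs <= 1:
        return final_size
    frac = min(epoch / (total_epochs - 1), 1.0)
    size = start_size + frac * (final_size - start_size)
    return max(final_size, int(round(size)))

# Side lengths over a 10-epoch curriculum: starts at 64, ends at 16.
sizes = [trigger_size_schedule(e, 10) for e in range(10)]
```

Starting large makes the backdoor easy to learn; ending small is what allows activation with an inconspicuous patch at inference time.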

🛡️ Threat Analysis

Transfer Learning Attack

The attack exploits prompt tuning (analogous to adapter/LoRA tuning) as the injection vector, framing learnable prompt tokens as a novel attack surface in the transfer learning pipeline — a direct match for adapter/prompt trojan attacks.

Model Poisoning

TrAP embeds hidden trigger-activated backdoor behavior (object misclassification and object disappearance) into OVODs — the defining characteristic of a trojan/backdoor attack with specific trigger patterns.


Details

Domains
vision · multimodal
Model Types
vlm · transformer
Threat Tags
white_box · training_time · targeted · digital
Datasets
COCO
Applications
object detection · autonomous driving · robotics · surveillance