Defense · 2026

BackdoorIDS: Zero-shot Backdoor Detection for Pretrained Vision Encoder

Siquan Huang¹, Yijiang Li², Ningzhi Gao¹, Xingfu Yan³, Leyu Shi¹



Published on arXiv (arXiv:2603.11664)

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

BackdoorIDS consistently outperforms existing backdoor defenses across diverse attack types, datasets, and model families in a fully zero-shot, plug-and-play manner at inference time.

BackdoorIDS

Novel technique introduced


Self-supervised and multimodal vision encoders learn strong visual representations that are widely adopted in downstream vision tasks and large vision-language models (LVLMs). However, downstream users often rely on third-party pretrained encoders of uncertain provenance, exposing them to backdoor attacks. In this work, we propose BackdoorIDS, a simple yet effective zero-shot, inference-time backdoor-sample detection method for pretrained vision encoders. BackdoorIDS is motivated by two linked observations, Attention Hijacking and Restoration: under progressive input masking, a backdoored image initially concentrates attention on the malicious trigger features; once the masking ratio exceeds the trigger's robustness threshold, the trigger is deactivated and attention rapidly shifts back to benign content. This transition induces a pronounced change in the image embedding, whereas embeddings of clean images evolve smoothly as masking progresses. BackdoorIDS operationalizes this signal by extracting an embedding sequence along the masking trajectory and applying density-based clustering such as DBSCAN: an input is flagged as backdoored if its embedding sequence forms more than one cluster. Extensive experiments show that BackdoorIDS consistently outperforms existing defenses across diverse attack types, datasets, and model families. Notably, it is a plug-and-play approach that requires no retraining and operates fully zero-shot at inference time, making it compatible with a wide range of encoder architectures, including CNNs, ViTs, CLIP, and LLaVA-1.5.
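The detection pipeline in the abstract can be sketched in a few lines. The paper's reference implementation is not reproduced here, so the patch-masking scheme (`mask_patches`), the masking schedule, and the clustering parameters (`eps`, `min_pts`) are illustrative assumptions; a minimal DBSCAN is inlined so the sketch stays self-contained:

```python
import numpy as np

def dbscan_labels(X, eps=0.5, min_pts=3):
    """Minimal DBSCAN over the rows of X; returns cluster labels (-1 = noise)."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue  # already assigned, or not a core point
        labels[i] = cluster
        queue = list(neighbors[i])
        while queue:  # grow the density-connected component
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    queue.extend(neighbors[j])
        cluster += 1
    return labels

def mask_patches(image, ratio, patch=4, seed=0):
    """Zero out `ratio` of non-overlapping patch x patch squares (toy masking)."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    ph, pw = image.shape[0] // patch, image.shape[1] // patch
    for k in rng.permutation(ph * pw)[: int(ratio * ph * pw)]:
        i, j = divmod(k, pw)
        out[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0
    return out

def is_backdoored(encoder, image, ratios, eps=0.5, min_pts=3):
    """Embed the image at each masking ratio; flag if the trajectory splits."""
    embs = np.stack([encoder(mask_patches(image, r)) for r in ratios])
    n_clusters = len(set(dbscan_labels(embs, eps, min_pts)) - {-1})
    return n_clusters > 1
```

With, e.g., `ratios = np.linspace(0.0, 0.9, 10)`, a clean image's embeddings drift smoothly and form one dense cluster, while a trigger that is deactivated mid-trajectory splits the sequence into two clusters and trips the detector. In practice `encoder` would be the pretrained vision encoder under test, and `eps`/`min_pts` would be tuned to its embedding scale.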


Key Contributions

  • Identifies and formalizes the 'Attention Hijacking and Restoration' phenomenon: backdoored images show abrupt embedding shifts once progressive masking deactivates the trigger, while clean images evolve smoothly.
  • Proposes BackdoorIDS, a plug-and-play zero-shot backdoor sample detector that extracts embedding sequences along a masking trajectory and applies DBSCAN clustering to flag backdoored inputs.
  • Demonstrates compatibility with diverse architectures (CNNs, ViTs, CLIP, LLaVA-1.5) with no retraining required, outperforming existing defenses across attack types and datasets.

🛡️ Threat Analysis

Model Poisoning

BackdoorIDS is a defense against backdoor/trojan attacks on pretrained vision encoders. It detects backdoor-triggered inputs at inference time by exploiting the 'Attention Hijacking and Restoration' phenomenon, where backdoored inputs show abrupt embedding shifts under progressive masking — a direct ML10 defense.


Details

Domains
vision, multimodal
Model Types
cnn, transformer, vlm
Threat Tags
training_time, inference_time, targeted, digital
Applications
image classification, vision-language models, visual representation learning