Robust Defense Strategies for Multimodal Contrastive Learning: Efficient Fine-tuning Against Backdoor Attacks
Md. Iqbal Hossain 1, Afia Sajeeda 1, Neeresh Kumar Perla 1, Ming Shao 2
Published on arXiv
2511.13545
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
The segmentation-oracle-guided fine-tuning strategy effectively removes backdoor effects in CLIP models using only a compact dataset, without requiring retraining from scratch.
The advent of multimodal deep learning models, such as CLIP, has unlocked new frontiers in a wide range of applications, from image-text understanding to classification tasks. However, these models are not immune to adversarial attacks, particularly backdoor attacks, which can subtly manipulate model behavior. Moreover, existing defense methods typically involve training from scratch or fine-tuning on a large dataset without pinpointing the specific labels that are affected. In this study, we introduce an innovative strategy to enhance the robustness of multimodal contrastive learning models against such attacks. In particular, given a poisoned CLIP model, our approach can identify the backdoor trigger and pinpoint the victim samples and labels in an efficient manner. To that end, an image segmentation "oracle" is introduced as the supervisor for the output of the poisoned CLIP. We develop two algorithms to rectify the poisoned model: (1) differentiating between CLIP and the oracle's knowledge to identify potential triggers; (2) pinpointing affected labels and victim samples, and curating a compact fine-tuning dataset. With this knowledge, we can rectify the poisoned CLIP model to negate backdoor effects. Extensive experiments on visual recognition benchmarks demonstrate that our strategy is effective in CLIP-based backdoor defense.
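One way to picture the trigger-identification step is an occlusion sweep: where masking a patch flips the poisoned model's prediction back to the oracle's label, that patch likely contains the trigger. The sketch below is a minimal illustration of this idea with a toy stand-in for a poisoned classifier — the function names, the 4×4 trigger, and the occlusion mechanism are assumptions for demonstration, not the paper's actual algorithm.

```python
import numpy as np

def toy_poisoned_clip(image, target_label=9):
    """Hypothetical stand-in for a poisoned CLIP classifier: it
    predicts `target_label` whenever the 4x4 trigger patch in the
    top-left corner is intact, and the clean label 3 otherwise."""
    if np.all(image[:4, :4] == 1.0):
        return target_label
    return 3

def locate_trigger(image, poisoned_model, oracle_label, patch=4):
    """Slide an occlusion window over the image. Positions where
    masking flips the poisoned model back to the oracle's label
    are marked as likely trigger locations."""
    h, w = image.shape
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one patch
            if poisoned_model(occluded) == oracle_label:
                heatmap[i // patch, j // patch] = 1.0
    return heatmap

# 16x16 image with the trigger stamped in the top-left corner
img = np.zeros((16, 16))
img[:4, :4] = 1.0
heat = locate_trigger(img, toy_poisoned_clip, oracle_label=3)
# heat is nonzero only at the patch covering the trigger
```

The key design point is that the oracle supplies the "clean" reference label, so no access to the original training data is needed to localize the trigger.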
Key Contributions
- An image segmentation 'oracle' that supervises poisoned CLIP outputs to distinguish backdoor-influenced knowledge from clean knowledge
- Algorithm to differentiate CLIP vs. oracle predictions to localize potential backdoor triggers
- Algorithm to pinpoint victim labels and poisoned samples, enabling curation of a compact fine-tuning dataset for efficient model rectification
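The second contribution — curating a compact fine-tuning set — can be sketched as a disagreement filter between the poisoned CLIP and the oracle. The snippet below is a hedged illustration under simplified assumptions (label-level predictions on both sides, one attack target label); the function name and data are hypothetical.

```python
def curate_finetune_set(clip_preds, oracle_preds):
    """Flag samples where the (possibly poisoned) CLIP prediction
    disagrees with the segmentation oracle. The disagreeing CLIP
    labels are candidate victim (attack-target) labels, and the
    flagged indices form a compact set for targeted fine-tuning."""
    suspects, victim_labels = [], set()
    for idx, (c, o) in enumerate(zip(clip_preds, oracle_preds)):
        if c != o:
            suspects.append(idx)       # suspect victim sample
            victim_labels.add(c)       # label the backdoor pushes toward
    return suspects, victim_labels

clip_preds   = [3, 9, 9, 2, 9]   # toy poisoned CLIP: trigger flips to 9
oracle_preds = [3, 1, 5, 2, 7]   # toy oracle labels from segmentation
suspects, victims = curate_finetune_set(clip_preds, oracle_preds)
# suspects -> [1, 2, 4]; victims -> {9}
```

Fine-tuning only on such a compact, targeted subset is what lets the defense avoid retraining from scratch.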
🛡️ Threat Analysis
Primary contribution is a defense against backdoor/trojan attacks on CLIP — proposes methods to identify hidden triggers, pinpoint poisoned samples/labels, and fine-tune the model to eliminate backdoor behavior without retraining from scratch.