defense 2026

Finetune Like You Pretrain: Boosting Zero-shot Adversarial Robustness in Vision-language Models

Songlong Xing 1, Weijie Wang 1,2, Zhengyu Zhao 3, Jindong Gu 4, Philip Torr 4, Nicu Sebe 1

Published on arXiv

2604.11576

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Achieves superior adversarial robustness and clean accuracy across 14 downstream datasets while preserving zero-shot transfer capabilities better than mainstream adversarial finetuning approaches

AdvFLYP

Novel technique introduced


Despite their impressive zero-shot abilities, vision-language models such as CLIP have been shown to be susceptible to adversarial attacks. To enhance their adversarial robustness, recent studies finetune the pretrained vision encoder of CLIP with adversarial examples on a proxy dataset such as ImageNet, aligning adversarial images with the correct class labels. However, these methods overlook the important roles of training data distributions and learning objectives, resulting in reduced zero-shot capabilities and limited transferability of robustness across domains and datasets. In this work, we propose a simple yet effective paradigm, AdvFLYP, which follows the training recipe of CLIP's pretraining process when performing adversarial finetuning. Specifically, AdvFLYP finetunes CLIP with adversarial images created from image-text pairs collected from the web, and matches them with their corresponding texts via a contrastive loss. To alleviate distortion of the adversarial image embeddings of noisy web images, we further propose to regularise AdvFLYP by penalising the deviation of adversarial image features. We show that the logit- and feature-level regularisation terms benefit robustness and clean accuracy, respectively. Extensive experiments on 14 downstream datasets spanning various domains show the superiority of our paradigm over mainstream practices. Our code and model weights are released at https://github.com/Sxing2/AdvFLYP.


Key Contributions

  • AdvFLYP paradigm that follows CLIP's pretraining recipe for adversarial finetuning using web-scraped image-text pairs and contrastive loss
  • Logit-level and feature-level regularization terms that improve robustness and clean accuracy respectively
  • Demonstrates superior zero-shot robustness transferability across 14 downstream datasets compared to existing adversarial finetuning methods
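The combined objective behind these contributions — a CLIP-style symmetric contrastive loss computed on adversarial images, plus a feature-level penalty on how far adversarial embeddings drift from their clean counterparts — can be sketched as follows. This is a minimal numpy illustration: the function names, the squared-L2 form of the penalty, and the `lam_feat` weight are assumptions for exposition, not taken from the paper's released code (which also includes a logit-level regulariser not shown here).

```python
import numpy as np

def clip_contrastive_loss(img_feats, txt_feats, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings,
    as used in CLIP pretraining (and, per the paper, reused for finetuning)."""
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(img))                # diagonal pairs are positives

    def ce(l):
        # Cross-entropy of each row against its diagonal (matching) entry.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return 0.5 * (ce(logits) + ce(logits.T))    # image-to-text + text-to-image

def advflyp_objective(adv_img_feats, clean_img_feats, txt_feats, lam_feat=1.0):
    """Illustrative combined objective: contrastive loss on adversarial image
    features plus a penalty on their deviation from clean image features."""
    contrastive = clip_contrastive_loss(adv_img_feats, txt_feats)
    feat_reg = np.mean(np.sum((adv_img_feats - clean_img_feats) ** 2, axis=1))
    return contrastive + lam_feat * feat_reg
```

With perfectly aligned, mutually orthogonal embeddings the contrastive term is near zero, and when adversarial features equal clean features the penalty vanishes, so the objective reduces to the contrastive loss alone.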

🛡️ Threat Analysis

Input Manipulation Attack

The paper addresses the adversarial robustness of vision-language models against attacks at inference time. The proposed defense (AdvFLYP) finetunes CLIP with adversarial examples to improve resistance to input manipulation attacks while maintaining zero-shot transfer capabilities.
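As a concrete illustration of this inference-time threat model, a one-step L-infinity perturbation (FGSM, the single-step core that PGD iterates with projection) can be sketched as follows. The linear scorer and all names here are illustrative toys, not the paper's attack setup.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=8 / 255):
    """One L-inf gradient-sign step on input x, clipped to the valid [0, 1]
    image range. PGD repeats this step with projection back into the eps-ball."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy example: a linear scorer s(x) = w @ x for the true class. Ascending the
# loss means descending the score, so the loss gradient w.r.t. x is -w.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
x = rng.uniform(size=16)
x_adv = fgsm_perturb(x, -w)

assert w @ x_adv <= w @ x  # the perturbed input lowers the true-class score
```

Each coordinate moves by at most eps in the direction that increases the loss, so the perturbation stays visually small while degrading the model's score — the setting AdvFLYP is trained to resist.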


Details

Domains
vision, nlp, multimodal
Model Types
vlm, transformer, multimodal
Threat Tags
inference_time, digital
Datasets
ImageNet; 14 downstream datasets spanning various domains
Applications
zero-shot image classification; vision-language understanding