Proxy Robustness in Vision Language Models is Effortlessly Transferable
Xiaowei Fu 1,2, Fuxiang Huang 1,2, Lei Zhang 1
Published on arXiv (arXiv:2601.12865)
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
HPT-GPD improves the zero-shot adversarial robustness of CLIP across 15 downstream datasets while preventing the natural-generalization degradation caused by naive proxy distillation (e.g., restoring Food101 accuracy from 61% back toward 75%).
HPT-GPD (Heterogeneous Proxy Transfer with Generalization-Pivot Decoupling)
Novel technique introduced
As a pivotal technique for improving the defense of deep models, adversarial robustness transfer via distillation has demonstrated remarkable success in conventional image classification tasks. However, this paradigm encounters critical challenges when applied to vision-language models (VLMs) such as CLIP: constructing an adversarially robust teacher for large-scale multi-modal models demands prohibitively high computational resources. We bridge this gap by revealing an interesting phenomenon: vanilla CLIP (without adversarial training) exhibits intrinsic defensive capability against adversarial examples generated by another CLIP with a different architecture. We formally define this as proxy adversarial robustness and propose a Heterogeneous Proxy Transfer (HPT) framework that establishes cross-architectural robustness distillation channels between CLIP variants, effortlessly enabling VLM robustness transfer from proxy to target models. However, such a proxy transfer paradigm easily induces severe overfitting, leading to a sharp degradation in zero-shot natural generalization. To resolve this, we design Generalization-Pivot Decoupling (GPD), which exploits differences in learning-rate scheduling to decouple the proxy transfer process into a generalization-anchored warm-up that maintains generalization and a generalization-pulled HPT stage that promotes adversarial robustness, achieving an equilibrium between natural generalization and adversarial robustness. Extensive experiments on 15 zero-shot datasets demonstrate the effectiveness of our HPT-GPD method. The code is available at github.com/fxw13/HPT-GPD.
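The core HPT idea can be sketched in a few lines: adversarial examples are crafted against a frozen *proxy* model, and the *target* model is then distilled to match the proxy's still-robust predictions on those examples. The snippet below is a minimal illustrative sketch only; the two linear "encoders", the FGSM attack, and all shapes are stand-ins for heterogeneous CLIP backbones, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 8, 3                          # input dim, number of classes (illustrative)

W_proxy = rng.normal(size=(D, C))    # stand-in for proxy CLIP (frozen)
W_target = rng.normal(size=(D, C))   # stand-in for target CLIP (being trained)

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def fgsm_on_proxy(x, y, eps):
    """One-step attack crafted against the proxy model only."""
    p = softmax(x @ W_proxy)
    onehot = np.eye(C)[y]
    grad_x = (p - onehot) @ W_proxy.T        # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad_x)

x = rng.normal(size=(4, D))
y = np.array([0, 1, 2, 0])
x_adv = fgsm_on_proxy(x, y, eps=0.1)

# Distillation step: pull the target's distribution on proxy-crafted
# adversarial examples toward the proxy's distribution (KL divergence).
p_proxy = softmax(x_adv @ W_proxy)
p_target = softmax(x_adv @ W_target)
kl = np.sum(p_proxy * (np.log(p_proxy) - np.log(p_target)), axis=-1).mean()
```

In a real training loop this KL term would be minimized over the target's parameters; the point of the sketch is only the cross-architecture structure: the attack sees one model, the distillation loss couples it to another.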
Key Contributions
- Discovers 'proxy adversarial robustness': vanilla CLIP without adversarial training inherently resists adversarial examples generated by heterogeneous CLIP architectures
- Proposes Heterogeneous Proxy Transfer (HPT) framework that distills cross-architecture robustness from a proxy CLIP to a target CLIP without requiring a costly adversarially-trained teacher
- Designs Generalization-Pivot Decoupling (GPD) via differential learning rate scheduling to maintain zero-shot natural generalization while transferring adversarial robustness
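The two-phase learning-rate idea behind GPD can be pictured as a schedule with a low-LR warm-up that anchors the model near its pretrained (well-generalizing) weights, followed by a larger, decaying LR for robustness distillation. The function below is a hypothetical sketch; all constants and the cosine decay are illustrative assumptions, not the paper's exact schedule.

```python
import math

def gpd_lr(step, total_steps, warmup_frac=0.2, warmup_lr=1e-6, peak_lr=1e-4):
    """Illustrative two-phase LR schedule in the spirit of GPD."""
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        # Phase 1 (generalization-anchored warm-up): a tiny LR keeps the
        # model close to its pretrained, well-generalizing state.
        return warmup_lr
    # Phase 2 (generalization-pulled HPT): a larger, cosine-decayed LR
    # drives the robustness-distillation updates.
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1 + math.cos(math.pi * t))
```

The asymmetry between the two phases is the point: the warm-up LR is small enough that zero-shot generalization is preserved, so the later robustness phase starts from (and is pulled back toward) a generalizing pivot.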
🛡️ Threat Analysis
The paper's primary contribution is a defense against adversarial examples targeting VLMs (CLIP). It leverages proxy adversarial robustness and distillation to improve zero-shot adversarial robustness against PGD-generated perturbations at inference time.
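For concreteness, the PGD threat model referenced above amounts to iterated signed-gradient ascent projected back into an L-infinity epsilon-ball. The sketch below uses a linear surrogate classifier purely for illustration; the surrogate, step sizes, and epsilon are assumptions, not the paper's evaluation setup.

```python
import numpy as np

rng = np.random.default_rng(1)
D, C = 8, 3
W = rng.normal(size=(D, C))          # illustrative surrogate classifier

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def pgd(x, y, eps=0.1, alpha=0.02, steps=10):
    """Multi-step L-inf PGD attack on the linear surrogate."""
    x_adv = x.copy()
    onehot = np.eye(C)[y]
    for _ in range(steps):
        p = softmax(x_adv @ W)
        grad = (p - onehot) @ W.T            # d(cross-entropy)/dx for a linear model
        x_adv = x_adv + alpha * np.sign(grad)    # ascent step on the loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # project into the eps-ball
    return x_adv

x = rng.normal(size=(4, D))
y = np.array([0, 1, 2, 1])
x_adv = pgd(x, y)
```

Defenses like HPT-GPD are evaluated by running such attacks against the defended model at inference time and measuring how much zero-shot accuracy survives.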