defense · 2025

Enhancing CLIP Robustness via Cross-Modality Alignment

Xingyu Zhu 1,2, Beier Zhu 2, Shuo Wang 1, Kesen Zhao 2, Hanwang Zhang 2

6 citations · 68 references · arXiv

Published on arXiv · 2510.24038

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

COLA is training-free and achieves an average 6.7% accuracy improvement on ImageNet and its variants under PGD adversarial attacks, while maintaining high clean accuracy.

COLA (Cross-modality Alignment)

Novel technique introduced


Abstract

Vision-language models (VLMs) such as CLIP demonstrate strong generalization in zero-shot classification but remain highly vulnerable to adversarial perturbations. Existing methods primarily focus on adversarial fine-tuning or prompt optimization; they often overlook the gap in CLIP's encoded features, which manifests as text and image features lying far apart in the embedding space. This misalignment is significantly amplified under adversarial perturbations, leading to severe degradation in classification performance. To address this problem, we propose Cross-modality Alignment, dubbed COLA, an optimal transport (OT)-based framework that explicitly addresses adversarial misalignment by restoring both global image-text alignment and local structural consistency in the feature space. (1) COLA first projects adversarial image embeddings onto a subspace spanned by class text features, effectively filtering out non-semantic distortions while preserving discriminative information. (2) It then models images and texts as discrete distributions over multiple augmented views and refines their alignment via OT, with the subspace projection seamlessly integrated into the cost computation. This design ensures stable cross-modal alignment even under adversarial conditions. COLA is training-free and compatible with existing fine-tuned models. Extensive evaluations across 14 zero-shot classification benchmarks demonstrate the effectiveness of COLA, with an average improvement of 6.7% on ImageNet and its variants under PGD adversarial attacks, while maintaining high accuracy on clean samples.
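Step (1) of the abstract, projecting an adversarial image embedding onto the subspace spanned by the class text features, can be sketched as follows. This is a minimal illustration of the general linear-algebra operation, not the paper's implementation: the function name and the QR-based construction are assumptions, and real CLIP embeddings would come from the model's encoders rather than toy arrays.

```python
import numpy as np

def project_onto_text_subspace(img_emb, text_feats):
    """Project an image embedding onto the subspace spanned by class
    text features (illustrative sketch of COLA's projection step).

    img_emb:    (d,) adversarial image embedding.
    text_feats: (C, d) matrix whose rows are class text embeddings.
    """
    # Orthonormal basis for the span of the text features, via QR on
    # the transposed matrix (columns of Q span the text subspace).
    Q, _ = np.linalg.qr(text_feats.T)   # (d, C)
    coeffs = Q.T @ img_emb              # coordinates in the subspace
    return Q @ coeffs                   # projection back into R^d
```

Components of the embedding orthogonal to every class text feature are discarded by the projection, which is the sense in which non-semantic (and thus potentially adversarial) distortions are filtered out.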


Key Contributions

  • Identifies adversarial misalignment between image and text feature spaces in CLIP as a root cause of adversarial vulnerability
  • Proposes COLA, a training-free optimal transport framework that projects adversarial image embeddings onto a text-feature subspace to filter non-semantic distortions
  • Achieves 6.7% average accuracy improvement on ImageNet and variants under PGD attacks across 14 zero-shot classification benchmarks without modifying model weights
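The OT refinement named in the contributions, aligning a discrete distribution over augmented image views with one over class texts, is typically solved with Sinkhorn iterations under an entropic regularizer. The sketch below shows that generic solver with uniform marginals; the regularization strength and iteration count are illustrative choices, not the paper's hyperparameters, and the cost matrix (where COLA integrates the subspace projection) is left to the caller.

```python
import numpy as np

def sinkhorn(cost, eps=0.05, iters=200):
    """Entropic optimal transport between uniform marginals
    (generic Sinkhorn sketch, not COLA's exact formulation).

    cost: (m, n) matrix, e.g. distances between m augmented image
          views and n class text features.
    Returns the (m, n) transport plan.
    """
    m, n = cost.shape
    a = np.full(m, 1.0 / m)          # uniform mass over image views
    b = np.full(n, 1.0 / n)          # uniform mass over class texts
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones(m)
    for _ in range(iters):           # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]
```

Classification can then read off how much transport mass each class column receives: entries of the plan couple views to the classes they align with most cheaply.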

🛡️ Threat Analysis

Input Manipulation Attack

Directly defends against adversarial input perturbations (PGD attacks) on CLIP vision-language models at inference time by restoring image-text feature alignment in the embedding space.
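For context on the threat model, PGD crafts an input perturbation by repeatedly stepping along the loss gradient's sign and projecting back into an L∞ ball around the clean input. Below is a minimal sketch against a toy linear scorer with a hand-derived gradient, purely to show the iterate-and-project structure; the function, parameters, and setup are illustrative assumptions, not the evaluation protocol of the paper.

```python
import numpy as np

def pgd_attack(x, w, y_sign, eps=0.1, alpha=0.02, steps=10):
    """Minimal L_inf PGD sketch against a linear scorer f(x) = w·x.

    y_sign = +1 means the clean score should be positive; the attack
    lowers it. grad of (y_sign * w·x) w.r.t. x is simply y_sign * w.
    """
    x_adv = x.copy()
    for _ in range(steps):
        grad = y_sign * w
        x_adv = x_adv - alpha * np.sign(grad)      # signed-gradient step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project to eps-ball
    return x_adv
```

In the paper's setting the perturbed input is an image fed to CLIP's vision encoder; COLA intervenes afterwards, in embedding space, rather than modifying the attack or the model weights.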


Details

Domains
vision · multimodal · nlp
Model Types
vlm · transformer
Threat Tags
white_box · inference_time · digital
Datasets
ImageNet · ImageNet-V2 · ImageNet-Sketch · ImageNet-A · ImageNet-R
Applications
zero-shot image classification · vision-language models