Semantic-aware Adversarial Fine-tuning for CLIP
Jiacheng Zhang , Jinhao Li , Hanxun Huang , Sarah M. Erfani , Benjamin I.P. Rubinstein , Feng Liu
Published on arXiv
2602.12461
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
SAFT achieves substantial improvements in zero-shot adversarial robustness across 16 datasets compared to prior adversarial fine-tuning methods for CLIP.
SAFT (Semantic-aware Adversarial Fine-Tuning)
Novel technique introduced
Recent studies have shown that the CLIP model's adversarial robustness in zero-shot classification tasks can be enhanced by adversarially fine-tuning its image encoder with adversarial examples (AEs), which are generated by minimizing the cosine similarity between images and a hand-crafted template (e.g., "A photo of a {label}"). However, it has been shown that the cosine similarity between a single image and a single hand-crafted template is insufficient to measure the similarity of image-text pairs. Building on this, in this paper, we find that AEs generated using cosine similarity may fail to fool CLIP when the similarity metric is replaced with semantically enriched alternatives, making the image encoder fine-tuned with these AEs less robust. To overcome this issue, we first propose a semantic-ensemble attack that generates semantic-aware AEs by minimizing the average similarity between the original image and an ensemble of refined textual descriptions. These descriptions are initially generated by a foundation model to capture core semantic features beyond hand-crafted templates and are then refined to reduce hallucinations. Building on this attack, we propose Semantic-aware Adversarial Fine-Tuning (SAFT), which fine-tunes CLIP's image encoder with semantic-aware AEs. Extensive experiments show that SAFT outperforms current methods, achieving substantial improvements in zero-shot adversarial robustness across 16 datasets. Our code is available at: https://github.com/tmlr-group/SAFT.
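The semantically enriched similarity the abstract refers to replaces the single template score with an average over an ensemble of per-class descriptions. A minimal NumPy sketch of that scoring rule (toy random vectors stand in for CLIP embeddings; `ensemble_similarity` is an illustrative name, not the authors' API):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def ensemble_similarity(image_emb, description_embs):
    """Semantically enriched score: the AVERAGE cosine similarity
    between one image embedding and an ensemble of textual-description
    embeddings, instead of a single hand-crafted-template score."""
    return float(np.mean([cosine(image_emb, d) for d in description_embs]))

# toy embeddings standing in for CLIP image/text encoder outputs
rng = np.random.default_rng(0)
img = rng.normal(size=512)
descs = rng.normal(size=(8, 512))  # e.g. 8 refined descriptions for one class
score = ensemble_similarity(img, descs)
```

An AE optimized only against the single-template cosine score can leave this averaged score largely intact, which is the failure mode the paper identifies.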
Key Contributions
- Identifies that AEs optimized against the CLIP cosine score fail to fool CLIP under semantically enriched similarity metrics (CuPL, WCA), exposing a weakness in prior adversarial fine-tuning methods.
- Proposes a semantic-ensemble attack that minimizes image similarity against an ensemble of LLM-generated, hallucination-filtered class descriptions, producing more universally effective adversarial examples.
- Introduces SAFT (Semantic-aware Adversarial Fine-Tuning), which trains CLIP's image encoder on these semantic-aware AEs, outperforming prior adversarial fine-tuning methods across 16 zero-shot classification datasets.
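The semantic-ensemble attack described above can be sketched as PGD that descends the average cosine similarity between the perturbed image and the ensemble of description embeddings, projected onto an l_inf ball. This is a hedged sketch, not the authors' implementation; `image_encoder` and the embeddings are placeholders for CLIP components, and the default `eps`/`alpha`/`steps` are illustrative:

```python
import torch
import torch.nn.functional as F

def semantic_ensemble_attack(image_encoder, images, text_embs,
                             eps=8/255, alpha=2/255, steps=10):
    """PGD sketch of the semantic-ensemble attack: perturb `images`
    within an l_inf ball of radius `eps` to MINIMIZE the average cosine
    similarity between their embeddings and an ensemble of description
    embeddings `text_embs` (K descriptions, one row each)."""
    delta = torch.zeros_like(images, requires_grad=True)
    text_embs = F.normalize(text_embs, dim=-1)
    for _ in range(steps):
        emb = F.normalize(image_encoder(images + delta), dim=-1)
        sim = (emb @ text_embs.t()).mean()  # average over the ensemble
        sim.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend on similarity
            delta.clamp_(-eps, eps)             # project onto the eps-ball
            delta.grad.zero_()
    return (images + delta).detach()
```

Because the loss averages over many refined descriptions rather than one template, the resulting AEs remain effective even when the defender's similarity metric is swapped for an enriched one.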
🛡️ Threat Analysis
Proposes a new gradient-based attack (a PGD-style semantic-ensemble attack) to generate stronger adversarial examples for CLIP, and defends against them via adversarial fine-tuning (SAFT). Both the attack and the defense directly concern input manipulation of a vision-language model at inference time.
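The defensive side, adversarial fine-tuning, can be sketched as a training step that re-aligns the embeddings of semantic-aware AEs with the averaged description embedding of their true class. All names here are illustrative (a minimal sketch assuming pre-generated adversarial images and a `(num_classes, K, dim)` tensor of description embeddings), not the authors' API:

```python
import torch
import torch.nn.functional as F

def saft_step(image_encoder, optimizer, adv_images, labels,
              class_text_embs, tau=0.07):
    """One SAFT-style fine-tuning step (sketch): update the image
    encoder so semantic-aware AEs embed close to the mean description
    embedding of their true class. `class_text_embs` has shape
    (num_classes, K, dim): K refined descriptions per class."""
    protos = F.normalize(class_text_embs.mean(dim=1), dim=-1)  # (C, dim) prototypes
    emb = F.normalize(image_encoder(adv_images), dim=-1)       # (B, dim)
    logits = emb @ protos.t() / tau                            # cosine logits
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper's pipeline these adversarial images would come from the semantic-ensemble attack each epoch; here the step is shown in isolation.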