Survey · arXiv · Aug 7, 2025
Zane Xu, Jason Sun · San Francisco State University · Park University
Surveys eight defenses against adversarial attacks on CLIP-like VLMs, covering fine-tuning and test-time paradigms for zero-shot robustness
Tags: Input Manipulation Attack · vision · multimodal
This report synthesizes eight seminal papers on the zero-shot adversarial robustness of vision-language models (VLMs) like CLIP. A central challenge in this domain is the inherent trade-off between enhancing adversarial robustness and preserving the model's zero-shot generalization capabilities. We analyze two primary defense paradigms: Adversarial Fine-Tuning (AFT), which modifies model parameters, and Training-Free/Test-Time Defenses, which preserve them. We trace the evolution from alignment-preserving methods (TeCoA) to embedding space re-engineering (LAAT, TIMA), and from input heuristics (AOM, TTC) to latent-space purification (CLIPure). Finally, we identify key challenges and future directions including hybrid defense strategies and adversarial pre-training.
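At the core of the Adversarial Fine-Tuning (AFT) paradigm described above is the generation of adversarial examples, typically via projected gradient descent (PGD) within an L-infinity ball. The sketch below is a minimal, hypothetical illustration of that inner loop; a real AFT method such as TeCoA would attack CLIP's image encoder, whereas here a toy linear "encoder" stands in so the gradient is analytic.

```python
import numpy as np

def pgd_perturb(x, w, eps=0.03, alpha=0.01, steps=10):
    """Maximize the toy loss w.x within an L-infinity ball of radius eps.

    Stand-in for the PGD attack used inside adversarial fine-tuning:
    repeatedly step in the sign of the loss gradient, then project the
    accumulated perturbation back into the eps-ball around the input.
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = w                                  # d(w.(x+delta))/d(delta) = w for a linear loss
        delta = delta + alpha * np.sign(grad)     # gradient-sign ascent step
        delta = np.clip(delta, -eps, eps)         # project back into the L-inf ball
    return x + delta

rng = np.random.default_rng(0)
x = rng.normal(size=8)      # stand-in for an input image (flattened)
w = rng.normal(size=8)      # stand-in for the loss gradient direction
x_adv = pgd_perturb(x, w)

# The perturbation stays bounded while the loss strictly increases.
assert np.all(np.abs(x_adv - x) <= 0.03 + 1e-9)
assert w @ x_adv > w @ x
```

In AFT, such perturbed inputs are fed back into training so the model's parameters adapt to them; the test-time paradigms surveyed here instead counter the perturbation at inference while leaving the pretrained weights untouched.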
vlm · transformer