Defense · 2026

What Makes VLMs Robust? Towards Reconciling Robustness and Accuracy in Vision-Language Models

Sen Nie 1,2, Jie Zhang 1,2, Zhongqi Wang 1,2, Zhaoyang Wei 2, Shiguang Shan 1,2, Xilin Chen 1,2


Published on arXiv: 2603.12799

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Achieves state-of-the-art robustness-accuracy trade-off on 18 datasets and generalizes to large VLMs like LLaVA and Qwen-VL

R-Adapt

Novel technique introduced


Achieving adversarial robustness in Vision-Language Models (VLMs) inevitably compromises accuracy on clean data, presenting a long-standing and challenging trade-off. In this work, we revisit this trade-off by investigating a fundamental question: What makes VLMs robust? Through a detailed analysis of adversarially fine-tuned models, we examine how robustness mechanisms function internally and how they interact with clean accuracy. Our analysis reveals that adversarial robustness is not uniformly distributed across network depth. Instead, unexpectedly, it is primarily localized within the shallow layers, driven by a low-frequency spectral bias and input-insensitive attention patterns. Meanwhile, updates to the deep layers tend to undermine both clean accuracy and robust generalization. Motivated by these insights, we propose Adversarial Robustness Adaptation (R-Adapt), a simple yet effective framework that freezes all pre-trained weights and introduces minimal, insight-driven adaptations only in the initial layers. This design achieves an exceptional balance between adversarial robustness and clean accuracy. R-Adapt further supports training-free, model-guided, and data-driven paradigms, offering flexible pathways to seamlessly equip standard models with robustness. Extensive evaluations on 18 datasets and diverse tasks demonstrate our state-of-the-art performance under various attacks. Notably, R-Adapt generalizes efficiently to large vision-language models (e.g., LLaVA and Qwen-VL) to enhance their robustness. Our project page is available at https://summu77.github.io/R-Adapt.


Key Contributions

  • Discovers that adversarial robustness in VLMs is localized in shallow layers via low-frequency spectral bias and input-insensitive attention
  • Proposes R-Adapt framework that freezes pre-trained weights and adapts only initial layers to balance robustness and clean accuracy
  • Supports training-free, model-guided, and data-driven paradigms for flexible robustness enhancement
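The core design decision in the contributions above — keep every pre-trained weight frozen and attach trainable adaptations only to the initial (shallow) layers — can be sketched as a layer-selection plan. This is an illustrative sketch only; the function and field names (`build_adaptation_plan`, `adapter_attached`, the layer counts) are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of R-Adapt-style selective adaptation: all pre-trained
# weights stay frozen, and lightweight adapters are attached only to the
# first `num_shallow_adapted` blocks. Names are illustrative assumptions.

def build_adaptation_plan(num_layers, num_shallow_adapted):
    """Per layer: pretrained weights never train; adapters attach only
    to the shallow layers, where robustness is found to localize."""
    plan = []
    for depth in range(num_layers):
        plan.append({
            "layer": depth,
            "pretrained_trainable": False,  # freeze all original weights
            "adapter_attached": depth < num_shallow_adapted,
        })
    return plan

def trainable_layers(plan):
    """Only shallow-layer adapters contribute trainable parameters."""
    return [entry["layer"] for entry in plan if entry["adapter_attached"]]

# Example: a 12-block vision encoder adapted only in its first 3 blocks.
plan = build_adaptation_plan(num_layers=12, num_shallow_adapted=3)
print(trainable_layers(plan))  # -> [0, 1, 2]
```

In a real framework the same plan would translate to setting `requires_grad = False` on every pre-trained parameter and registering new adapter modules on the selected shallow blocks, so the deep layers (whose updates the paper finds harmful to clean accuracy) are never touched.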

🛡️ Threat Analysis

Input Manipulation Attack

Defends vision-language models against adversarial perturbation attacks at inference time. The paper analyzes adversarially fine-tuned models and proposes R-Adapt to reconcile robustness against adversarial examples with clean accuracy.
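The attack class defended against here is the classic adversarial perturbation: nudge each input feature a small step `eps` in the direction that increases the model's loss. A minimal FGSM-style sketch on a toy linear scorer with an analytic gradient (illustrative only; not the attack setup evaluated in the paper):

```python
# Minimal FGSM-style input perturbation sketch. For a toy linear score
# s = <w, x> with label y in {-1, +1} and loss L = -y * s, the input
# gradient is dL/dx_i = -y * w_i; the attack steps x by eps * sign(grad).

def sign(v):
    """Return -1, 0, or +1 depending on the sign of v."""
    return (v > 0) - (v < 0)

def fgsm_perturb(x, w, y, eps):
    """One fast-gradient-sign step on the input features."""
    grad = [-y * wi for wi in w]           # analytic gradient of the loss
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

x = [0.5, -0.2, 0.1]
w = [1.0, -2.0, 0.5]
x_adv = fgsm_perturb(x, w, y=1, eps=0.03)  # each feature moves by +/- eps
```

An image-space attack works the same way per pixel, with the gradient obtained by backpropagation and the result clipped to a valid range; a robust model is one whose prediction survives such bounded perturbations while its clean-input accuracy is preserved.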


Details

Domains
vision, nlp, multimodal
Model Types
vlm, multimodal, transformer
Threat Tags
inference_time, digital
Datasets
18 datasets mentioned (specific names not in abstract/title)
Applications
vision-language understanding, multimodal classification, image-text retrieval