defense 2026

Complementary Text-Guided Attention for Zero-Shot Adversarial Robustness

Lu Yu 1, Haiyang Zhang 1, Changsheng Xu 2,3

0 citations


Published on arXiv (2603.18598)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Comp-TGA achieves an 11.95% improvement in zero-shot robust accuracy over state-of-the-art techniques across 16 datasets

Comp-TGA

Novel technique introduced


Owing to their impressive zero-shot capabilities, pre-trained vision-language models (e.g., CLIP) have attracted widespread attention and adoption across various domains. Nonetheless, CLIP has been observed to be susceptible to adversarial examples. Through experimental analysis, we observe that adversarial perturbations induce shifts in text-guided attention. Building upon this observation, we propose a simple yet effective strategy: Text-Guided Attention for Zero-Shot Robustness (TGA-ZSR). This framework incorporates two components: a Local Attention Refinement Module and a Global Attention Constraint Module. Our goal is to maintain the generalization of the CLIP model while enhancing its adversarial robustness. The Local Attention Refinement Module aligns the text-guided attention of adversarial examples with that of clean examples. The Global Attention Constraint Module, in turn, acquires text-guided attention from both the target and original models using clean examples; its objective is to maintain model performance on clean samples while enhancing overall robustness. However, we observe that this method occasionally focuses on irrelevant or spurious features, which can lead to suboptimal performance and undermine robustness in certain scenarios. To overcome this limitation, we further propose a novel approach called Complementary Text-Guided Attention (Comp-TGA). This method integrates two types of foreground attention: attention guided by the class prompt and reversed attention driven by the non-class prompt. These complementary attention mechanisms allow the model to capture a more comprehensive and accurate representation of the foreground. Experiments validate that TGA-ZSR and Comp-TGA yield 9.58% and 11.95% improvements, respectively, in zero-shot robust accuracy over the current state-of-the-art techniques across 16 datasets.
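To make the mechanism concrete, here is a minimal sketch of what "text-guided attention" and the alignment losses could look like. This is an illustrative formulation only: the attention is modeled as a softmax over cosine similarities between patch features and a prompt embedding, and the two modules are reduced to L2 distances between attention maps (the paper's exact architecture and loss weighting may differ).

```python
import numpy as np

def text_guided_attention(patch_feats, text_emb):
    """Attention over image patches guided by a text prompt embedding.

    patch_feats: (N, D) array of patch features; text_emb: (D,) prompt embedding.
    Returns an (N,) softmax-normalized attention map over patches.
    (Illustrative formulation; the paper's exact recipe may differ.)
    """
    # cosine similarity between each patch and the text prompt
    p = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    logits = p @ t
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

def l2_attention_loss(attn_a, attn_b):
    """L2 distance between two attention maps, as used (conceptually) by both
    the Local Attention Refinement Module (adversarial vs. clean attention)
    and the Global Attention Constraint Module (target vs. original model)."""
    return float(np.sum((attn_a - attn_b) ** 2))
```

In this reading, the local module would minimize `l2_attention_loss(attn_adv, attn_clean)` within one model, while the global module would minimize the same distance between the fine-tuned and the frozen original model on clean inputs.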


Key Contributions

  • TGA-ZSR framework with Local Attention Refinement and Global Attention Constraint modules to align text-guided attention between adversarial and clean examples
  • Complementary Text-Guided Attention (Comp-TGA) method integrating class and non-class prompt attention to avoid spurious features
  • 9.58% and 11.95% improvements in zero-shot robust accuracy over SOTA across 16 datasets
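The Comp-TGA fusion of class-prompt attention with reversed non-class-prompt attention can be sketched as follows. The fusion rule here (reverse, renormalize, then average) is a hypothetical choice for illustration; the paper may weight or combine the two cues differently.

```python
import numpy as np

def complementary_attention(attn_class, attn_nonclass):
    """Fuse class-prompt attention with reversed non-class-prompt attention.

    attn_class / attn_nonclass: (N,) attention maps summing to 1.
    Regions the non-class prompt ignores are treated as likely foreground,
    so reversing that map gives a second, complementary foreground cue.
    (Hypothetical fusion rule; illustrative only.)
    """
    reversed_attn = 1.0 - attn_nonclass          # reverse the background cue
    reversed_attn = reversed_attn / reversed_attn.sum()
    fused = 0.5 * (attn_class + reversed_attn)   # average the two cues
    return fused / fused.sum()
```

Intuitively, a patch scores high in the fused map when the class prompt attends to it and the non-class prompt does not, which is exactly the "comprehensive foreground" behavior the contribution describes.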

🛡️ Threat Analysis

Input Manipulation Attack

The paper addresses adversarial examples that cause vision-language models to misclassify inputs at inference time. It proposes the TGA-ZSR and Comp-TGA defense methods to improve robustness against adversarial perturbations while maintaining zero-shot capabilities.
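For readers unfamiliar with this threat class, the following toy sketch shows a projected gradient descent (PGD) attack, the standard instance of an inference-time input-manipulation attack, against a simple linear softmax classifier standing in for CLIP. Everything here (the linear model, step sizes, budget) is an assumption for illustration, not the paper's evaluation setup.

```python
import numpy as np

def pgd_attack(x, y, W, eps=0.03, alpha=0.01, steps=10):
    """PGD attack on a linear softmax classifier (toy stand-in for CLIP).

    W: (C, D) class weights; x: (D,) clean input; y: true class index.
    Repeatedly steps in the sign of the loss gradient, projecting the
    perturbation back into an L_inf ball of radius eps.
    """
    x_adv = x.copy()
    for _ in range(steps):
        logits = W @ x_adv
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # gradient of cross-entropy wrt the input: W^T (p - onehot(y))
        g = W.T @ (p - np.eye(W.shape[0])[y])
        x_adv = x_adv + alpha * np.sign(g)          # ascend the loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # project into eps-ball
    return x_adv
```

The defenses summarized above aim to keep the model's text-guided attention (and hence its prediction) stable under exactly this kind of bounded perturbation.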


Details

Domains
vision, nlp, multimodal
Model Types
multimodal, transformer, vlm
Threat Tags
inference_time, digital
Datasets
16 datasets mentioned but not specifically named in abstract/excerpt
Applications
image classification, zero-shot learning