Defense · 2025

TTP: Test-Time Padding for Adversarial Detection and Robust Adaptation on Vision-Language Models

Zhiwei Li 1,2, Yitian Pang 3, Weining Wang 2, Zhenan Sun 1,2, Qi Li 1,2

0 citations · 54 references · arXiv


Published on arXiv · 2512.16523

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

TTP consistently surpasses state-of-the-art test-time defenses across diverse CLIP backbones and fine-grained benchmarks, substantially improving adversarial robustness without degrading clean accuracy

Test-Time Padding (TTP)

Novel technique introduced


Vision-Language Models (VLMs), such as CLIP, have achieved impressive zero-shot recognition performance but remain highly susceptible to adversarial perturbations, posing significant risks in safety-critical scenarios. Previous training-time defenses rely on adversarial fine-tuning, which requires labeled data and costly retraining, while existing test-time strategies fail to reliably distinguish between clean and adversarial inputs, thereby preventing both adversarial robustness and clean accuracy from reaching their optimum. To address these limitations, we propose Test-Time Padding (TTP), a lightweight defense framework that performs adversarial detection followed by targeted adaptation at inference. TTP identifies adversarial inputs via the cosine similarity shift between CLIP feature embeddings computed before and after spatial padding, yielding a universal threshold for reliable detection across architectures and datasets. For detected adversarial cases, TTP employs trainable padding to restore disrupted attention patterns, coupled with a similarity-aware ensemble strategy for a more robust final prediction. For clean inputs, TTP leaves them unchanged by default or optionally integrates existing test-time adaptation techniques for further accuracy gains. Comprehensive experiments on diverse CLIP backbones and fine-grained benchmarks show that TTP consistently surpasses state-of-the-art test-time defenses, delivering substantial improvements in adversarial robustness without compromising clean accuracy. The code for this paper will be released soon.
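The detection step described above can be sketched in a few lines: embed the input before and after spatial padding and flag it when the cosine similarity between the two embeddings drops below a threshold. This is a minimal illustration, not the paper's implementation; the encoder, padding size, and threshold value `tau` here are placeholders (TTP uses CLIP image embeddings and a calibrated universal threshold).

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two flat feature vectors."""
    a, b = np.ravel(a), np.ravel(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def pad_image(img, pad=16):
    """Zero-pad an (H, W, C) image on all four spatial sides."""
    return np.pad(img, ((pad, pad), (pad, pad), (0, 0)))

def is_adversarial(img, encode, tau=0.9, pad=16):
    """Flag the input when padding shifts its embedding too much,
    i.e. the cosine similarity of pre-/post-padding features
    falls below the threshold tau."""
    return cosine_sim(encode(img), encode(pad_image(img, pad))) < tau
```

The intuition is that clean inputs yield stable features under padding, while adversarial perturbations are brittle to this spatial change, so their embeddings drift and the similarity drops below `tau`.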


Key Contributions

  • Universal adversarial detection via cosine similarity shift between CLIP embeddings computed before and after spatial padding, yielding a generalizable threshold across architectures and datasets
  • Trainable test-time padding that dynamically optimizes padding parameters via entropy minimization to restore attention patterns disrupted by adversarial perturbations
  • Similarity-aware ensemble strategy that aggregates high-confidence augmented views to produce robust final predictions while leaving clean inputs unmodified
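The second contribution, trainable test-time padding, optimizes the padded border by gradient-based entropy minimization. The sketch below substitutes a gradient-free grid search over a single constant padding value, purely to illustrate the entropy-minimization objective; the classifier `predict`, the candidate grid, and the padding width are all assumptions, not the paper's method.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    """Shannon entropy of a probability vector."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def pad_with_value(img, v, pad=8):
    """Pad an (H, W, C) image with a constant border value v."""
    return np.pad(img, ((pad, pad), (pad, pad), (0, 0)), constant_values=v)

def tune_padding(img, predict, candidates=np.linspace(-1, 1, 21), pad=8):
    """Gradient-free stand-in for trainable padding: pick the constant
    border value whose resulting prediction has the lowest entropy."""
    best_v, best_h = None, float("inf")
    for v in candidates:
        h = entropy(predict(pad_with_value(img, v, pad)))
        if h < best_h:
            best_v, best_h = float(v), h
    return best_v, best_h
```

In TTP proper the padding pixels themselves are treated as trainable parameters and updated by backpropagating the entropy of the CLIP prediction; the grid search above only conveys the objective being minimized.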

🛡️ Threat Analysis

Input Manipulation Attack

Defends against adversarial examples (gradient-based input perturbations such as PGD and FGSM) targeting CLIP VLMs at inference time. The core contribution is a lightweight adversarial detection mechanism followed by targeted adaptation to restore robustness — directly countering input manipulation attacks.
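For detected adversarial inputs, the adaptation step ends with an ensemble over augmented views. The paper's ensemble is similarity-aware; since its exact weighting is not reproduced here, the sketch below uses prediction entropy as a hypothetical confidence proxy, keeping only the most confident views before averaging.

```python
import numpy as np

def ensemble_predict(view_probs, keep_frac=0.5):
    """Confidence-filtered ensemble over augmented views: rank views by
    prediction entropy (lower = more confident), keep the top fraction,
    and average their probability vectors into one robust prediction."""
    probs = np.asarray(view_probs, dtype=float)
    clipped = np.clip(probs, 1e-12, 1.0)
    ent = -(clipped * np.log(clipped)).sum(axis=1)
    k = max(1, int(len(probs) * keep_frac))
    keep = np.argsort(ent)[:k]  # indices of the k most confident views
    return probs[keep].mean(axis=0)
```

Filtering before averaging prevents low-confidence (often still-corrupted) views from diluting the final prediction, which is the role the similarity-aware ensemble plays in TTP.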


Details

Domains
vision · multimodal
Model Types
vlm · transformer
Threat Tags
white_box · black_box · inference_time · digital
Applications
zero-shot image recognition · image classification