
Enhancing Adversarial Transferability in Visual-Language Pre-training Models via Local Shuffle and Sample-based Attack

Xin Liu, Aoyang Zhou

0 citations · 41 references · NAACL


Published on arXiv · 2511.00831

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

LSSA significantly enhances multimodal adversarial transferability across diverse VLP architectures and outperforms advanced attacks on Large Vision-Language Models

LSSA (Local Shuffle and Sample-based Attack)

Novel technique introduced


Visual-Language Pre-training (VLP) models have achieved strong performance across various downstream tasks, yet they remain vulnerable to adversarial examples. Prior efforts improve the adversarial transferability of multimodal adversarial examples through cross-modal interactions, but these approaches suffer from overfitting: they lack input diversity because they rely excessively on information from adversarial examples in one modality when crafting attacks in the other. To address this issue, we draw inspiration from strategies used in adversarial training methods and propose a novel attack called Local Shuffle and Sample-based Attack (LSSA). LSSA randomly shuffles one of the local image blocks to expand the original image-text pairs, generates adversarial images from the expanded inputs, and samples around them. It then uses both the original and sampled images to generate the adversarial texts. Extensive experiments on multiple models and datasets demonstrate that LSSA significantly enhances the transferability of multimodal adversarial examples across diverse VLP models and downstream tasks. Moreover, LSSA outperforms other advanced attacks on Large Vision-Language Models.
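The abstract describes the shuffle step only at a high level. One plausible reading, sketched below in PyTorch, splits the image into a grid, picks one block at random, and permutes the sub-patches inside it; the function name, grid size, and sub-patch count are illustrative assumptions, not details from the paper.

```python
import torch

def local_shuffle(image: torch.Tensor, grid: int = 4, sub: int = 2) -> torch.Tensor:
    """Randomly shuffle the sub-patches inside one local block of `image`.

    `image` is a (C, H, W) tensor with H and W divisible by grid * sub.
    `grid` and `sub` are illustrative hyperparameters, not values from the
    paper: the image is split into grid x grid blocks, one block is picked
    at random, and its sub x sub sub-patches are permuted.
    """
    c, h, w = image.shape
    bh, bw = h // grid, w // grid
    out = image.clone()
    # Pick one local block uniformly at random.
    bi = torch.randint(grid, (1,)).item()
    bj = torch.randint(grid, (1,)).item()
    block = out[:, bi * bh:(bi + 1) * bh, bj * bw:(bj + 1) * bw]  # view into out
    # Cut the block into sub x sub patches, then write them back in a
    # random order; the rest of the image is left untouched.
    ph, pw = bh // sub, bw // sub
    patches = [block[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw].clone()
               for i in range(sub) for j in range(sub)]
    for k, p in enumerate(torch.randperm(len(patches)).tolist()):
        i, j = divmod(k, sub)
        block[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = patches[p]
    return out
```

Applying this transform several times to one image yields a set of shuffled variants, which is one way the original image-text pairs could be expanded before the adversarial images are generated.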


Key Contributions

  • Local Shuffle and Sample-based Attack (LSSA) that randomly shuffles local image blocks to expand input diversity and reduce overfitting in cross-modal adversarial attack generation
  • Sampling-based strategy that perturbs around generated adversarial images and leverages both original and sampled inputs when crafting adversarial texts, improving cross-modal transferability (a sketch follows this list)
  • Demonstrated superior adversarial transferability across multiple VLP models and Large Vision-Language Models, outperforming prior advanced attacks
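The sampling step can likewise be sketched under assumptions. Here, neighbors of an adversarial image are drawn with Gaussian noise and re-projected into the ε-ball around the clean image; the noise scale, sample count, and projection are illustrative choices, and the paper's actual sampling distribution may differ.

```python
import torch
from typing import List, Optional

def sample_neighbors(adv_image: torch.Tensor,
                     clean_image: Optional[torch.Tensor] = None,
                     n_samples: int = 4,
                     sigma: float = 0.01,
                     eps: float = 8 / 255) -> List[torch.Tensor]:
    """Draw images from a small neighborhood of an adversarial image.

    Gaussian noise with scale `sigma` is an illustrative choice, as are
    `n_samples` and `eps`; when the clean image is given, each sample is
    re-projected into the L-inf eps-ball around it so it remains a valid
    adversarial candidate.
    """
    samples = []
    for _ in range(n_samples):
        noisy = adv_image + sigma * torch.randn_like(adv_image)
        if clean_image is not None:
            noisy = clean_image + torch.clamp(noisy - clean_image, -eps, eps)
        samples.append(noisy.clamp(0.0, 1.0))
    return samples
```

The adversarial text would then be optimized against features from the original image plus these samples, for instance by averaging their embeddings, rather than against the single adversarial image alone.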

🛡️ Threat Analysis

Input Manipulation Attack

The core contribution is crafting adversarial examples (images and texts perturbed via gradient-based optimization) that cause misclassification or task failure at inference time on VLP models. The paper specifically improves adversarial transferability, i.e. black-box attack effectiveness, across diverse VLP architectures using input-diversity techniques.
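As context for the gradient-based optimization mentioned above, a minimal PGD-style loop against a hypothetical CLIP-like surrogate encoder is sketched below; transfer attacks of this family optimize on a white-box surrogate and then apply the resulting examples to black-box targets. The encoder interface, loss, and hyperparameters are assumptions for illustration, not LSSA's actual objective or update schedule.

```python
import torch

def pgd_image_attack(model, image, text_emb,
                     eps=8 / 255, alpha=2 / 255, steps=10):
    """Minimal PGD loop that pushes an image away from its matched text.

    `model.encode_image`, the cosine-similarity objective, and all
    hyperparameters are placeholders for a CLIP-like surrogate; they are
    not the paper's formulation.
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        img_emb = model.encode_image(adv)
        # Similarity between the (adversarial) image and its matched text.
        loss = torch.cosine_similarity(img_emb, text_emb, dim=-1).mean()
        (grad,) = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - alpha * grad.sign()                    # descend similarity
            adv = image + torch.clamp(adv - image, -eps, eps)  # L-inf projection
            adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```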


Details

Domains
vision · nlp · multimodal
Model Types
vlm · transformer
Threat Tags
white_box · black_box · inference_time · digital
Applications
image-text retrieval · visual question answering · visual language pre-training models