
Improving Generalizability and Undetectability for Targeted Adversarial Attacks on Multimodal Pre-trained Models

Zhifang Zhang 1, Jiahan Zhang 2, Shengjie Zhou 3, Qi Wei 4, Shuo He 4, Feng Liu 5, Lei Feng 1

2 citations · 64 references · arXiv


Published on arXiv · 2509.19994

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

PTA achieves a high targeted attack success rate on semantically similar and partially known targets while remaining undetectable by multiple anomaly detection methods on multimodal pre-trained models such as ImageBind.

Proxy Targeted Attack (PTA)

Novel technique introduced


Multimodal pre-trained models (e.g., ImageBind), which align distinct data modalities into a shared embedding space, have shown remarkable success across downstream tasks. However, their increasing adoption raises serious security concerns, especially regarding targeted adversarial attacks. In this paper, we show that existing targeted adversarial attacks on multimodal pre-trained models still have limitations in two aspects: generalizability and undetectability. Specifically, the crafted targeted adversarial examples (AEs) exhibit limited generalization to partially known or semantically similar targets in cross-modal alignment tasks (i.e., limited generalizability) and can be easily detected by simple anomaly detection methods (i.e., limited undetectability). To address these limitations, we propose a novel method called Proxy Targeted Attack (PTA), which leverages multiple source-modal and target-modal proxies to optimize targeted AEs, ensuring they remain evasive to defenses while aligning with multiple potential targets. We also provide theoretical analyses to highlight the relationship between generalizability and undetectability and to ensure optimal generalizability while meeting the specified requirements for undetectability. Furthermore, experimental results demonstrate that our PTA can achieve a high success rate across various related targets and remain undetectable against multiple anomaly detection methods.


Key Contributions

  • Identifies two critical limitations of existing targeted adversarial attacks on multimodal pre-trained models: poor generalizability to unseen/similar targets and vulnerability to simple anomaly detection.
  • Proposes Proxy Targeted Attack (PTA), which uses both source-modal and target-modal proxies to jointly optimize adversarial examples for generalizability and undetectability.
  • Provides theoretical analysis formalizing the relationship between generalizability and undetectability, and demonstrates experimentally that PTA evades multiple anomaly detectors while achieving high success across semantically related targets.

🛡️ Threat Analysis

Input Manipulation Attack

Directly proposes a new targeted adversarial example crafting method (PTA) that manipulates multimodal pre-trained model outputs at inference time via gradient-based perturbations, evading anomaly detectors while generalizing to semantically similar targets in cross-modal retrieval and classification tasks.
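The core mechanism described above can be illustrated with a minimal sketch. The toy below uses a linear "encoder" and plain NumPy gradients in place of a real multimodal model such as ImageBind, and a simple averaged-distance loss over several target-proxy embeddings standing in for PTA's multi-proxy objective; the L∞ clipping stands in for the undetectability constraint. All names and parameter values are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "encoder" standing in for a deep multimodal embedding model
# (assumption: the real attack targets models like ImageBind).
W = rng.normal(size=(8, 16))              # maps 16-d inputs to 8-d embeddings

def embed(x):
    return W @ x

x = rng.normal(size=16)                   # clean source-modal input
target_proxies = rng.normal(size=(3, 8))  # embeddings of several related targets

eps, alpha, steps = 0.1, 0.005, 300       # L_inf budget, step size, iterations
delta = np.zeros_like(x)

for _ in range(steps):
    e = embed(x + delta)
    # Average squared distance to all target proxies: aligning with several
    # proxies at once is the route to generalizable targeted AEs.
    grad = np.zeros_like(x)
    for t in target_proxies:
        grad += 2.0 * W.T @ (e - t)
    grad /= len(target_proxies)
    delta -= alpha * grad                  # gradient step on the alignment loss
    delta = np.clip(delta, -eps, eps)      # small budget keeps the AE stealthy

loss_before = np.mean([np.sum((embed(x) - t) ** 2) for t in target_proxies])
loss_after = np.mean([np.sum((embed(x + delta) - t) ** 2) for t in target_proxies])
print(loss_after < loss_before, np.max(np.abs(delta)) <= eps)
```

Because the perturbation is optimized against the average of several proxy embeddings rather than a single fixed target, the resulting example stays aligned with any of the related targets, which is the generalizability property the paper emphasizes.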


Details

Domains
vision, multimodal, nlp
Model Types
multimodal, transformer
Threat Tags
white_box, grey_box, inference_time, targeted, digital
Applications
cross-modal retrieval, image classification, multimodal alignment