Defense · 2025

Vulnerability-Aware Robust Multimodal Adversarial Training

Junrui Zhang 1, Xinyu Zhao 2, Jie Peng 1, Chenjie Wang 3, Jianmin Ji 1, Tianlong Chen 2

0 citations · 33 references · arXiv

Published on arXiv · 2511.18138

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

VARMAT achieves robustness improvements of 12.73%, 22.21%, and 11.19% over baselines on three multimodal datasets (CMU-MOSEI, UR-FUNNY, MIMIC) by targeting the most vulnerable modalities during adversarial training.

VARMAT

Novel technique introduced


Multimodal learning has shown significant advantages on a wide range of tasks by integrating multiple modalities. However, the interdependencies among modalities also make multimodal models more susceptible to adversarial attacks. Existing methods mainly attack specific modalities or indiscriminately attack all modalities; in this paper, we find that these approaches ignore how differently each modality contributes to final robustness, resulting in suboptimal robustness. To bridge this gap, we introduce Vulnerability-Aware Robust Multimodal Adversarial Training (VARMAT), a probe-in-training adversarial training method that improves multimodal robustness by identifying the vulnerability of each modality. Specifically, VARMAT first explicitly quantifies the vulnerability of each modality, grounded in a first-order approximation of the attack objective (Probe). We then apply a targeted regularization term that penalizes modalities with high vulnerability, guiding robust learning while maintaining task accuracy (Training). We demonstrate the enhanced robustness of our method across multiple multimodal datasets involving diverse modalities, achieving robustness improvements of 12.73%, 22.21%, and 11.19% on three multimodal datasets and revealing a significant blind spot in multimodal adversarial training.
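
The Probe step suggests a compact sketch. Below is a minimal, hypothetical PyTorch reconstruction of per-modality vulnerability scoring based only on the abstract's description: under a first-order approximation of the attack objective, the damage a small perturbation to one modality can do scales with the norm of the loss gradient with respect to that modality's input. The dict-based model interface, the function name, and the gradient-norm score are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def probe_modality_vulnerability(model, inputs, labels):
    """Score each modality's adversarial vulnerability (hypothetical sketch).

    Under a first-order approximation, the attack objective's sensitivity to
    a perturbation of modality m is governed by ||dL/dx_m||, so we use the
    batch-mean L2 norm of the loss gradient as the vulnerability score.
    `inputs` is assumed to be a dict of modality name -> tensor, e.g.
    {"text": ..., "audio": ..., "vision": ...}, and the model is assumed to
    accept that dict directly.
    """
    leaves = {m: x.clone().detach().requires_grad_(True) for m, x in inputs.items()}
    loss = F.cross_entropy(model(leaves), labels)
    grads = torch.autograd.grad(loss, list(leaves.values()))
    return {
        m: g.flatten(start_dim=1).norm(p=2, dim=1).mean().item()
        for m, g in zip(leaves.keys(), grads)
    }
```

A higher score marks a modality whose input an attacker can exploit with the least perturbation effort; the training step can then concentrate its regularization on those modalities.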


Key Contributions

  • First-order approximation method to explicitly quantify the adversarial vulnerability of each modality independently during training (Probe step; sketched in the code above)
  • Targeted regularization term that penalizes high-vulnerability modalities to guide robustness improvements while preserving clean task accuracy (Training step; see the sketch after this list)
  • Demonstrated robustness gains of 12.73%, 22.21%, and 11.19% across three diverse multimodal datasets (CMU-MOSEI, UR-FUNNY, MIMIC)
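
Building on the probe function sketched above, here is one plausible shape for the Training step: the task loss plus a gradient-penalty regularizer whose per-modality weights come from the probed vulnerability scores, so the most vulnerable modalities are driven toward a locally flat loss surface. The penalty form and `reg_weight` are assumptions reconstructed from the abstract, not the paper's published objective.

```python
import torch
import torch.nn.functional as F

def varmat_style_step(model, inputs, labels, optimizer, reg_weight=0.1):
    """One vulnerability-weighted training step (illustrative sketch).

    Reuses `probe_modality_vulnerability` from the earlier sketch. The
    regularizer is a squared-gradient penalty per modality, weighted by
    normalized vulnerability scores -- an assumed stand-in for the paper's
    targeted regularization term.
    """
    scores = probe_modality_vulnerability(model, inputs, labels)
    total = sum(scores.values()) + 1e-12
    weights = {m: s / total for m, s in scores.items()}

    # Recompute input gradients with create_graph=True so the penalty
    # itself is differentiable (double backprop).
    leaves = {m: x.clone().detach().requires_grad_(True) for m, x in inputs.items()}
    task_loss = F.cross_entropy(model(leaves), labels)
    grads = torch.autograd.grad(task_loss, list(leaves.values()), create_graph=True)
    penalty = sum(weights[m] * g.pow(2).mean() for m, g in zip(leaves.keys(), grads))

    optimizer.zero_grad()
    (task_loss + reg_weight * penalty).backward()
    optimizer.step()
    return task_loss.item(), penalty.item()
```

The design intuition: penalizing the squared input gradient of a vulnerable modality directly shrinks the very first-order term the probe measured, which is one way to realize "penalize modalities with high vulnerability" without touching the clean task loss.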

🛡️ Threat Analysis

Input Manipulation Attack

Proposes adversarial training as a defense against input manipulation attacks on multimodal models. It directly addresses adversarial examples crafted to fool multimodal classifiers at inference time, using first-order vulnerability probing to guide robust training.
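
To make the threat model concrete, the following is a minimal PGD-style sketch of the inference-time input manipulation VARMAT defends against: perturbing a single modality within an L-infinity budget to maximize the classifier's loss. The budget, step size, step count, and dict-based model interface are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack_one_modality(model, inputs, labels, target_mod,
                            eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD attack on one modality of a multimodal classifier (sketch).

    Keeps every other modality clean and perturbs `inputs[target_mod]`
    within an L-infinity ball of radius eps to maximize the loss.
    """
    x_orig = inputs[target_mod].clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        batch = {**inputs, target_mod: x_adv}
        loss = F.cross_entropy(model(batch), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                  # ascend the loss
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)   # project to budget
        x_adv = x_adv.detach()
    return {**inputs, target_mod: x_adv}
```

An attacker who first runs something like the probe above can pick `target_mod` to be the most vulnerable modality; that asymmetry is exactly the blind spot VARMAT's vulnerability-aware training is meant to close.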


Details

Domains: multimodal
Model Types: multimodal, transformer
Threat Tags: white_box, training_time, inference_time, digital
Datasets: CMU-MOSEI, UR-FUNNY, MIMIC
Applications: multimodal sentiment analysis, multimodal classification