survey 2026

Adversarial Attacks on Multimodal Large Language Models: A Comprehensive Survey

Bhavuk Jain 1, Sercan Ö. Arık 1, Hardeo K. Thakur 2

0 citations · Transactions on Machine Learni...

α

Published on arXiv

2603.27918

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Identifies cross-modal vulnerabilities as key expanded attack surface in MLLMs, arising from complex interplay between modality fusion mechanisms and shared embedding spaces


Multimodal large language models (MLLMs) integrate information from multiple modalities such as text, images, audio, and video, enabling complex capabilities such as visual question answering and audio translation. While powerful, this increased expressiveness introduces new and amplified vulnerabilities to adversarial manipulation. This survey provides a comprehensive and systematic analysis of adversarial threats to MLLMs, moving beyond enumerating attack techniques to explain the underlying causes of model susceptibility. We introduce a taxonomy that organizes adversarial attacks according to attacker objectives, unifying diverse attack surfaces across modalities and deployment settings. Additionally, we also present a vulnerability-centric analysis that links integrity attacks, safety and jailbreak failures, control and instruction hijacking, and training-time poisoning to shared architectural and representational weaknesses in multimodal systems. Together, this framework provides an explanatory foundation for understanding adversarial behavior in MLLMs and informs the development of more robust and secure multimodal language systems.


Key Contributions

  • Introduces goal-driven taxonomy organizing MLLM adversarial attacks by attacker objectives across modalities
  • Provides vulnerability-centric analysis linking attack families to shared architectural and representational weaknesses
  • Systematically covers cross-modal prompt injection, fusion mechanism attacks, adversarial illusions, and training-time poisoning

🛡️ Threat Analysis

Input Manipulation Attack

Survey covers adversarial perturbations and adversarial illusions designed to cause misclassification or erroneous outputs at inference time across multimodal inputs.


Details

Domains
multimodalnlpvisionaudio
Model Types
llmvlmmultimodaltransformer
Threat Tags
white_boxgrey_boxblack_boxtraining_timeinference_time
Applications
visual question answeringimage captioningaudio-visual scene analysisrobotics