
Adversarial Confusion Attack: Disrupting Multimodal Large Language Models

Jakub Hoscilowicz, Artur Janicki

1 citation · 20 references · arXiv


Published on arXiv (2511.20494)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

A single PGD-optimized adversarial image maximizing next-token entropy across a 4-model ensemble successfully transfers to proprietary models including GPT-5.1 and GPT-o3, causing confident hallucinations or incoherent outputs.

Adversarial Confusion Attack

Novel technique introduced


We introduce the Adversarial Confusion Attack, a new class of threats against multimodal large language models (MLLMs). Unlike jailbreaks or targeted misclassification, the goal is to induce systematic disruption that makes the model generate incoherent or confidently incorrect outputs. Practical applications include embedding such adversarial images into websites to prevent MLLM-powered AI Agents from operating reliably. The proposed attack maximizes next-token entropy using a small ensemble of open-source MLLMs. In the white-box setting, we show that a single adversarial image can disrupt all models in the ensemble, both in the full-image and Adversarial CAPTCHA settings. Despite relying on a basic adversarial technique (PGD), the attack generates perturbations that transfer to both unseen open-source (e.g., Qwen3-VL) and proprietary (e.g., GPT-5.1) models.
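The objective described above — PGD ascent on next-token Shannon entropy under a perturbation budget — can be sketched on a toy model. The snippet below is a minimal illustration, not the paper's implementation: the linear map `W` stands in for a real MLLM's vision-encoder-plus-LM-head, finite-difference gradients replace backpropagation, and all parameter values are arbitrary assumptions.

```python
import math

def softmax(logits):
    # numerically stable softmax
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    # Shannon entropy of a next-token distribution (nats)
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical stand-in for an MLLM: a fixed linear map from
# image pixels to next-token logits.
W = [[0.9, -0.4, 0.1],
     [-0.2, 0.8, -0.5],
     [0.3, -0.6, 0.7]]

def next_token_entropy(pixels):
    logits = [sum(w * x for w, x in zip(row, pixels)) for row in W]
    return entropy(softmax(logits))

def pgd_confuse(pixels, eps=0.3, alpha=0.05, steps=20, h=1e-4):
    """PGD ascent on next-token entropy within an L-infinity ball of
    radius eps (finite differences stand in for autograd)."""
    adv = list(pixels)
    for _ in range(steps):
        # estimate the gradient of entropy w.r.t. each pixel
        grad = []
        for i in range(len(adv)):
            bumped = list(adv)
            bumped[i] += h
            grad.append((next_token_entropy(bumped) - next_token_entropy(adv)) / h)
        # signed-gradient ascent step, then project back into the eps-ball
        adv = [x + alpha * (1.0 if g >= 0 else -1.0) for x, g in zip(adv, grad)]
        adv = [min(max(a, x0 - eps), x0 + eps) for a, x0 in zip(adv, pixels)]
    return adv
```

Against this toy model, the returned perturbation raises the entropy of the next-token distribution while staying within the L∞ budget — the "confusion" objective, as opposed to steering toward a specific target class.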


Key Contributions

  • Introduces the Adversarial Confusion Attack, a novel threat that maximizes next-token Shannon entropy via PGD to systematically destabilize MLLM decoding rather than steer outputs toward specific targets.
  • Demonstrates white-box disruption of an open-source ensemble in both full-image and Adversarial CAPTCHA settings with a single adversarial image.
  • Shows black-box transfer to unseen open-source (Qwen3-VL) and proprietary (GPT-5.1, GPT-o3, Gemini 3.0) MLLMs, with five characterized failure modes.

🛡️ Threat Analysis

Input Manipulation Attack

Uses PGD gradient-based optimization to craft adversarial visual perturbations that manipulate VLM outputs at inference time — a direct adversarial example attack on multimodal models in both full-image and patch (Adversarial CAPTCHA) settings.
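The ensemble component can be illustrated with the same toy setup: the attack loss is the mean next-token entropy across all ensemble members, so one perturbation must confuse every model at once. The two weight matrices below are hypothetical stand-ins for the vision-to-logits pipelines of real open-source MLLMs.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical stand-ins for an ensemble of open-source MLLMs:
# each maps the same image pixels to next-token logits.
ENSEMBLE = [
    [[0.9, -0.4], [-0.3, 0.8]],
    [[0.5, 0.6], [-0.7, 0.2]],
]

def ensemble_entropy(pixels):
    """Attack objective: mean next-token entropy over ensemble members.
    Maximizing this jointly destabilizes all models, which is what
    enables transfer to unseen models."""
    total = 0.0
    for W in ENSEMBLE:
        logits = [sum(w * x for w, x in zip(row, pixels)) for row in W]
        total += entropy(softmax(logits))
    return total / len(ENSEMBLE)
```

With a zero image every toy model emits uniform logits, so the objective sits at its maximum of ln 2; any informative input scores lower, and PGD pushes the perturbed image back toward that maximum.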


Details

Domains
vision, multimodal, nlp
Model Types
vlm, llm, transformer
Threat Tags
white_box, black_box, inference_time, digital
Datasets
LMSYS Arena, CCRU homepage screenshots
Applications
multimodal llms, mllm-powered ai agents, web-browsing ai agents