
Multi-Paradigm Collaborative Adversarial Attack Against Multi-Modal Large Language Models

Yuanbo Li 1, Tianyang Xu 1, Cong Hu 1, Tao Zhou 1, Xiao-Jun Wu 1, Josef Kittler 2


Published on arXiv

arXiv:2603.04846

Input Manipulation Attack (OWASP ML Top 10 — ML01)

Prompt Injection (OWASP LLM Top 10 — LLM01)

Key Finding

MPCAttack consistently outperforms state-of-the-art transferable adversarial attack methods in both targeted and untargeted settings across diverse open-source and closed-source MLLMs.

MPCAttack

Novel technique introduced


The rapid progress of Multi-Modal Large Language Models (MLLMs) has significantly advanced downstream applications. However, this progress also exposes serious transferable adversarial vulnerabilities. Existing adversarial attacks against MLLMs typically rely on surrogate models trained within a single learning paradigm and perform independent optimisation in their respective feature spaces. This restricted setting naturally limits the richness of feature representations, narrowing the search space and thus impeding the diversity of adversarial perturbations. To address this, we propose a novel Multi-Paradigm Collaborative Attack (MPCAttack) framework to boost the transferability of adversarial examples against MLLMs. MPCAttack aggregates semantic representations from both visual images and language texts and performs joint adversarial optimisation on the aggregated features through a Multi-Paradigm Collaborative Optimisation (MPCO) strategy. By performing contrastive matching on multi-paradigm features, MPCO adaptively balances the importance of different paradigm representations and guides the global perturbation optimisation, effectively alleviating representation bias. Extensive experiments on multiple benchmarks demonstrate the superiority of MPCAttack: it consistently outperforms state-of-the-art methods in both targeted and untargeted attacks on open-source and closed-source MLLMs. The code is released at https://github.com/LiYuanBoJNU/MPCAttack.
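The paper does not reproduce its MPCO objective here, so the following is only a rough, hypothetical sketch of the idea of contrastive matching across paradigms: each surrogate paradigm contributes an (image feature, text feature) pair, and a softmax over the per-paradigm similarities adaptively re-weights the paradigms when combining their losses. All names (`paradigm_weights`, `collaborative_loss`, the softmax-over-negative-similarity weighting, the temperature `tau`) are illustrative assumptions, not the authors' API or formula.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def paradigm_weights(img_feats, txt_feats, tau=0.5):
    """One (image, text) feature pair per surrogate paradigm, e.g.
    cross-modal alignment, multi-modal understanding, visual SSL."""
    sims = np.array([cosine(f, t) for f, t in zip(img_feats, txt_feats)])
    # Assumption: softmax over negative similarity up-weights paradigms
    # whose adversarial image feature is still far from the target text
    # feature, so the joint optimisation focuses on the lagging paradigm.
    w = np.exp(-sims / tau)
    return w / w.sum(), sims

def collaborative_loss(img_feats, txt_feats):
    # Weighted contrastive-matching loss aggregated over all paradigms
    # (targeted setting: smaller is better).
    w, sims = paradigm_weights(img_feats, txt_feats)
    return float(w @ (1.0 - sims))

rng = np.random.default_rng(1)
img_feats = [rng.normal(size=512) for _ in range(3)]
txt_feats = [rng.normal(size=512) for _ in range(3)]
w, sims = paradigm_weights(img_feats, txt_feats)
loss = collaborative_loss(img_feats, txt_feats)
```

Under this sketch, the paradigm whose adversarial feature matches the target text worst receives the largest weight, which is one plausible way to "adaptively balance the importance of different paradigm representations" as the abstract describes.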


Key Contributions

  • MPCAttack framework that aggregates features from multiple learning paradigms (cross-modal alignment, multi-modal understanding, visual self-supervised) to craft more diverse and transferable adversarial perturbations against MLLMs.
  • Multi-Paradigm Collaborative Optimisation (MPCO) strategy that uses contrastive matching to adaptively balance paradigm representations and guide global perturbation optimisation, alleviating single-paradigm representation bias.
  • Empirical demonstration of SOTA attack success rates in both targeted and untargeted settings against open-source and closed-source MLLMs.

🛡️ Threat Analysis

Input Manipulation Attack

MPCAttack crafts gradient-based adversarial perturbations on visual inputs using white-box surrogate models to cause incorrect outputs at inference time in target models — a canonical adversarial example / evasion attack.
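For readers unfamiliar with this attack class, a minimal single-step evasion (FGSM-style) on a toy linear surrogate illustrates the mechanics: compute the loss gradient with respect to the input and take a bounded step along its sign. This is a generic sketch of gradient-based evasion, not MPCAttack itself, and the linear "surrogate" stands in for the deep multi-paradigm encoders the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3072))       # toy surrogate: 10 classes, flat 32x32x3 image
x = rng.uniform(0.0, 1.0, size=3072)  # clean input in [0, 1]
y = 4                                 # true label

def grad_wrt_x(x, y):
    # Gradient of cross-entropy w.r.t. the input for a linear model:
    # dL/dx = W^T (softmax(Wx) - onehot(y))
    z = W @ x
    p = np.exp(z - z.max()); p /= p.sum()
    p[y] -= 1.0
    return W.T @ p

eps = 8.0 / 255.0                     # typical L_inf perturbation budget
x_adv = np.clip(x + eps * np.sign(grad_wrt_x(x, y)), 0.0, 1.0)
```

The perturbation is imperceptible by construction (bounded by `eps` per pixel) but is chosen to maximally increase the surrogate's loss; transfer attacks like MPCAttack then rely on such perturbations also fooling unseen target models.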


Details

Domains: vision, nlp, multimodal
Model Types: vlm, multimodal, llm, transformer
Threat Tags: white_box, black_box, inference_time, targeted, untargeted, digital
Applications: multi-modal large language models, visual question answering, image captioning