
Multi-Turn Adaptive Prompting Attack on Large Vision-Language Models

In Chong Choi 1, Jiacheng Zhang 1, Feng Liu 1, Yiliao Song 2

0 citations · 42 references · arXiv (Cornell University)


Published on arXiv · 2602.14399

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Improves attack success rates by 11–35% over state-of-the-art methods against LLaVA-V1.6-Mistral-7B, Qwen2.5-VL-7B-Instruct, Llama-3.2-Vision-11B-Instruct, and GPT-4o-mini.

MAPA

Novel technique introduced


Multi-turn jailbreak attacks are effective against text-only large language models (LLMs) because they introduce malicious content gradually across turns. When extended to large vision-language models (LVLMs), we find that naively adding visual inputs makes existing multi-turn jailbreaks easy to defend against. For example, an overly malicious visual input readily triggers the defense mechanisms of safety-aligned LVLMs, yielding more conservative responses. To address this, we propose MAPA: a multi-turn adaptive prompting attack that 1) at each turn, alternates text-vision attack actions to elicit the most malicious response; and 2) across turns, adjusts the attack trajectory through iterative back-and-forth refinement to gradually amplify response maliciousness. This two-level design enables MAPA to consistently outperform state-of-the-art methods, improving attack success rates by 11–35% on recent benchmarks against LLaVA-V1.6-Mistral-7B, Qwen2.5-VL-7B-Instruct, Llama-3.2-Vision-11B-Instruct, and GPT-4o-mini.


Key Contributions

  • First multi-turn jailbreak attack designed specifically for LVLMs, identifying that naive visual input addition triggers safety defenses
  • Turn-level adaptive action selection that alternates text and vision inputs to find the least-defended attack action per turn
  • Cross-turn trajectory refinement using a semantic maliciousness score to guide advance, regenerate, or rollback decisions across turns
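The two-level loop described above (turn-level action selection plus cross-turn advance/regenerate/rollback guided by a maliciousness score) can be sketched as follows. The model query, the scorer, and the decision rule here are hypothetical stand-ins for illustration only, not the paper's implementation:

```python
def query_model(history, action, attempt):
    """Stub for the target LVLM. A real attack would send crafted text
    or image inputs; here we return a deterministic pseudo-response."""
    return f"{action}-t{len(history)}-a{attempt}"


def maliciousness(response):
    """Stub semantic maliciousness score in [0, 1). The paper uses a
    semantic scoring of the response; this toy version just hashes the text."""
    return (sum(ord(c) * (i + 1) for i, c in enumerate(response)) % 997) / 997.0


def mapa_attack(max_turns=4, max_queries=50):
    """Toy two-level adaptive loop (hypothetical parameters)."""
    history, scores = [], [0.0]   # committed turns and their scores
    attempt, queries = 0, 0
    while len(history) < max_turns and queries < max_queries:
        # Turn level: try both attack actions (text vs. vision) and
        # keep whichever the model defends against least.
        candidates = []
        for action in ("text", "vision"):
            resp = query_model(history, action, attempt)
            candidates.append((maliciousness(resp), action, resp))
            queries += 1
        score, action, resp = max(candidates)
        if score >= scores[-1]:
            # Advance: maliciousness did not drop, so commit this turn.
            history.append((action, resp))
            scores.append(score)
        elif history:
            # Rollback: undo the previous turn and re-explore from there.
            history.pop()
            scores.pop()
            attempt += 1
        else:
            # Regenerate: retry the first turn with a fresh attempt.
            attempt += 1
    return history, scores


hist, scores = mapa_attack()
```

By construction the committed score trajectory is non-decreasing: a turn is only kept when it does not reduce maliciousness, and otherwise the attack rolls back or regenerates, which mirrors the back-and-forth refinement the contributions describe.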

🛡️ Threat Analysis


Details

Domains
multimodal, nlp
Model Types
vlm, llm
Threat Tags
black_box, inference_time, targeted
Datasets
JailBreakBench, HarmBench
Applications
vision-language model safety, chatbot safety alignment