survey 2026

Jailbreaking LLMs & VLMs: Mechanisms, Evaluation, and Unified Defense

Zejian Chen 1, Chao Li 1, Xi Zhang 1, Litian Zhang 1, He YiMin 2

1 citations · 137 references · arXiv

α

Published on arXiv

2601.03594

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Proposes a unified three-layer defense framework (perception, generation, parameter) consolidating shared jailbreak defense mechanisms across LLMs and VLMs in both text-only and multimodal settings.


This paper provides a systematic survey of jailbreak attacks and defenses on Large Language Models (LLMs) and Vision-Language Models (VLMs), emphasizing that jailbreak vulnerabilities stem from structural factors such as incomplete training data, linguistic ambiguity, and generative uncertainty. It further differentiates between hallucinations and jailbreaks in terms of intent and triggering mechanisms. We propose a three-dimensional survey framework: (1) Attack dimension-including template/encoding-based, in-context learning manipulation, reinforcement/adversarial learning, LLM-assisted and fine-tuned attacks, as well as prompt- and image-level perturbations and agent-based transfer in VLMs; (2) Defense dimension-encompassing prompt-level obfuscation, output evaluation, and model-level alignment or fine-tuning; and (3) Evaluation dimension-covering metrics such as Attack Success Rate (ASR), toxicity score, query/time cost, and multimodal Clean Accuracy and Attribute Success Rate. Compared with prior works, this survey spans the full spectrum from text-only to multimodal settings, consolidating shared mechanisms and proposing unified defense principles: variant-consistency and gradient-sensitivity detection at the perception layer, safety-aware decoding and output review at the generation layer, and adversarially augmented preference alignment at the parameter layer. Additionally, we summarize existing multimodal safety benchmarks and discuss future directions, including automated red teaming, cross-modal collaborative defense, and standardized evaluation.


Key Contributions

  • Three-dimensional survey framework spanning attack methods, defense strategies, and evaluation metrics across text-only LLMs and multimodal VLMs
  • Unified defense principles organized across perception layer (variant-consistency and gradient-sensitivity detection), generation layer (safety-aware decoding and output review), and parameter layer (adversarially augmented preference alignment)
  • Differentiation between hallucinations and jailbreaks by intent and triggering mechanism, with coverage of multimodal safety benchmarks and future directions including automated red teaming and standardized evaluation

🛡️ Threat Analysis

Input Manipulation Attack

The survey explicitly covers image-level adversarial perturbations and gradient/adversarial learning attacks targeting VLMs — visual adversarial inputs that manipulate VLM outputs qualify as ML01 input manipulation attacks.


Details

Domains
nlpmultimodal
Model Types
llmvlmtransformermultimodal
Threat Tags
white_boxblack_boxinference_time
Applications
large language modelsvision-language modelschatbotmultimodal ai systems