
Diffusion LLMs are Natural Adversaries for any LLM

David Lüdke, Tom Wollschläger, Paul Ungermann, Stephan Günnemann, Leo Schwinn

3 citations · 52 references · arXiv


Published on arXiv · 2511.00203

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Diffusion LLM-generated prompts are low-perplexity, diverse jailbreaks with strong black-box transferability to robustly trained and proprietary LLMs, achieved with far fewer samples than prior discrete optimization baselines.

LLM Inpainting Attack

Novel technique introduced


We introduce a novel framework that transforms the resource-intensive (adversarial) prompt optimization problem into an *efficient, amortized inference task*. Our core insight is that pretrained, non-autoregressive generative LLMs, such as Diffusion LLMs, which model the joint distribution over prompt-response pairs, can serve as powerful surrogates for prompt search. This approach enables the direct conditional generation of prompts, effectively replacing costly, per-instance discrete optimization with a small number of parallelizable samples. We provide a probabilistic analysis demonstrating that under mild fidelity assumptions, only a few conditional samples are required to recover high-reward (harmful) prompts. Empirically, we find that the generated prompts are low-perplexity, diverse jailbreaks that exhibit strong transferability to a wide range of black-box target models, including robustly trained and proprietary LLMs. Beyond adversarial prompting, our framework opens new directions for red teaming, automated prompt optimization, and leveraging emerging Flow- and Diffusion-based LLMs.
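The inpainting idea behind this framework can be sketched as a simple loop: fix the (harmful) target response, mask every prompt position, and repeatedly commit the masked position the model is most confident about. The toy Python below illustrates only the mechanics; `score_fn`, `inpaint_prompt`, and the demo scorer are hypothetical stand-ins for the surrogate Diffusion LLM's conditionals, not the paper's implementation.

```python
def inpaint_prompt(response_tokens, prompt_len, vocab, score_fn):
    """Toy masked-inpainting loop: prompt slots start fully masked (None);
    each step commits the masked slot whose best candidate token scores
    highest under score_fn, a stand-in for the surrogate model's
    conditional confidence p(token | partial prompt, fixed response)."""
    prompt = [None] * prompt_len
    while any(t is None for t in prompt):
        best = None
        for i, tok_here in enumerate(prompt):
            if tok_here is not None:
                continue  # already committed in an earlier step
            # Score every vocabulary candidate for this masked slot.
            score, tok = max(
                (score_fn(i, t, prompt, response_tokens), t) for t in vocab
            )
            if best is None or score > best[0]:
                best = (score, i, tok)
        _, i, tok = best
        prompt[i] = tok  # commit the most confident slot, keep the rest masked
    return prompt


# Hypothetical demo scorer: prefers tokens that echo the response cyclically.
vocab = ["x", "y", "z"]
response = ["x", "y"]
scorer = lambda i, tok, prompt, resp: 1.0 if tok == resp[i % len(resp)] else 0.0
print(inpaint_prompt(response, 4, vocab, scorer))  # ['x', 'y', 'x', 'y']
```

In a real Diffusion LLM the commit schedule unmasks many positions per denoising step and the conditionals come from the trained network, but the key structural point is the same: the response is clamped and only prompt positions are sampled.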


Key Contributions

  • Reframes adversarial prompt optimization as amortized inference over a Diffusion LLM modeling the joint prompt-response distribution, eliminating per-instance discrete optimization.
  • Probabilistic analysis showing only a small number of conditional samples are needed to recover high-reward harmful prompts under mild fidelity assumptions.
  • Demonstrates strong transferability of the generated low-perplexity jailbreaks to a wide range of black-box target models, including robustly trained and proprietary LLMs.
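The sample-complexity claim in the second bullet follows a standard independence calculation: if each conditional sample is a successful jailbreak with probability p, then k i.i.d. samples yield at least one success with probability 1 − (1 − p)^k. The back-of-envelope sketch below is illustrative only, not the paper's exact bound:

```python
import math

def samples_needed(p_success, target_conf):
    """Smallest k with 1 - (1 - p)^k >= target_conf, i.e. the number of
    i.i.d. conditional samples needed so that at least one succeeds with
    the desired confidence."""
    return math.ceil(math.log(1 - target_conf) / math.log(1 - p_success))

# Even a modest 10% per-sample hit rate needs only 44 samples for 99% coverage.
print(samples_needed(0.10, 0.99))  # 44
```

This is why a surrogate that produces even moderately faithful conditional samples can replace thousands of discrete-optimization queries with a handful of parallelizable generations.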

🛡️ Threat Analysis


Details

Domains
nlp, generative
Model Types
llm, diffusion
Threat Tags
black_box, inference_time, targeted
Applications
llm safety alignment, red teaming, jailbreaking chatbots