Toward Safer Diffusion Language Models: Discovery and Mitigation of Priming Vulnerability
Shojiro Yamabe 1,2, Jun Sakuma 1
Published on arXiv
2510.00565
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Injecting affirmative tokens at intermediate denoising steps reliably jailbreaks aligned DLMs; the proposed safety alignment method substantially mitigates this vulnerability while preserving task performance
Priming Vulnerability
Novel technique introduced
Diffusion language models (DLMs) generate tokens in parallel through iterative denoising, which can reduce latency and enable bidirectional conditioning. However, the safety risks posed by jailbreak attacks that exploit this inference mechanism are not well understood. In this paper, we reveal that DLMs have a critical vulnerability stemming from their iterative denoising process and propose a countermeasure. Specifically, our investigation shows that if an affirmative token for a harmful query appears at an intermediate step, subsequent denoising can be steered toward a harmful response even in aligned models. As a result, simply injecting such affirmative tokens can readily bypass the safety guardrails. Furthermore, we demonstrate that this vulnerability allows existing optimization-based jailbreak attacks to succeed on DLMs. Building on this analysis, we propose a novel safety alignment method tailored to DLMs that trains models to generate safe responses from contaminated intermediate states containing affirmative tokens. Our experiments indicate that the proposed method significantly mitigates the vulnerability with minimal impact on task performance. In addition, our method improves robustness against conventional jailbreak attacks. Our work underscores the need for DLM-specific safety research. Our code is available at https://github.com/mdl-lab/dlm-priming-vulnerability.
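The priming mechanism described above can be illustrated with a toy sketch. Everything here is hypothetical (`denoise_step`, the affirmative-token set, the rule-based "model"): a real DLM fills masked positions with a learned transformer, not hand-written rules. The sketch only shows the structural point that an affirmative token injected into an intermediate state steers all subsequent denoising.

```python
# Toy illustration of the priming vulnerability (assumed, simplified setup).
MASK = "[MASK]"
AFFIRMATIVE = {"Sure", "Certainly"}  # hypothetical affirmative tokens

def denoise_step(tokens):
    """Fill the first masked position. The toy 'aligned model' refuses a
    harmful query unless an affirmative token already appears in the partial
    sequence, mimicking how intermediate context steers later denoising."""
    primed = any(t in AFFIRMATIVE for t in tokens)
    out = list(tokens)
    for i, t in enumerate(out):
        if t == MASK:
            out[i] = "harmful-step" if primed else "I-cannot-help"
            break
    return out

def generate(tokens, inject_at=None):
    """Run denoising to completion; optionally inject 'Sure' into a
    still-masked position at step `inject_at` (the attack)."""
    step = 0
    while MASK in tokens:
        if step == inject_at:
            tokens[tokens.index(MASK)] = "Sure"  # injected affirmative token
        tokens = denoise_step(tokens)
        step += 1
    return tokens

benign = generate([MASK] * 4)               # refuses at every position
attacked = generate([MASK] * 4, inject_at=1)  # steered to harmful continuation
```

Without the injection the toy model refuses throughout; with a single injected "Sure" mid-trajectory, every later step continues harmfully, mirroring the paper's observation that one affirmative token at an intermediate state can flip the whole generation.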
Key Contributions
- Discovery of 'priming vulnerability' in DLMs: injecting affirmative tokens at intermediate iterative denoising steps steers aligned models toward harmful outputs, bypassing safety guardrails
- Demonstration that existing optimization-based (gradient-based) jailbreak attacks can successfully exploit this vulnerability on DLMs
- Novel DLM-specific safety alignment method that trains models to produce safe responses from contaminated intermediate states, significantly reducing vulnerability with minimal task performance impact
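The defense trains on contaminated intermediate states. A minimal sketch of how such a training example might be constructed, assuming a masked-diffusion setup: mask most positions of a safe target response, inject an affirmative token, and pair the corrupted state with the safe target for the denoising loss. The sampling scheme (`mask_ratio`, single-token injection) is an assumption; the paper's exact recipe may differ.

```python
import random

MASK = "[MASK]"

def contaminated_state(safe_response, affirmative="Sure", mask_ratio=0.7, seed=0):
    """Build a training input resembling a mid-denoising state: most response
    positions masked, with an adversarial affirmative token injected.
    (Hypothetical construction; the paper's sampling may differ.)"""
    rng = random.Random(seed)
    state = [tok if rng.random() > mask_ratio else MASK for tok in safe_response]
    state[rng.randrange(len(state))] = affirmative  # injected contamination
    return state

safe = ["I", "cannot", "assist", "with", "that", "request"]
state = contaminated_state(safe)
# Training pair: (state, safe); the model learns to denoise the contaminated
# intermediate state back to the safe refusal instead of following the prime.
```

Training on such pairs teaches the model that an affirmative token in an intermediate state is not a commitment to comply, which is exactly the assumption the priming attack exploits.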
🛡️ Threat Analysis
The paper demonstrates that existing optimization-based (gradient-based, GCG-style) jailbreak attacks can exploit the priming vulnerability to succeed on DLMs: adversarial token-level perturbations at inference time cause harmful outputs.
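The shape of such an optimization-based attack can be sketched as a greedy coordinate search over suffix tokens. This is a toy stand-in: `score` is a hypothetical objective (real GCG-style attacks use model gradients over the vocabulary to rank candidate substitutions, with the log-probability of an affirmative target as the loss), and the five-token vocabulary is invented for illustration.

```python
# Toy sketch of an optimization-based (GCG-style) suffix search.
VOCAB = ["a", "b", "Sure", "x", "!"]  # hypothetical tiny vocabulary

def score(suffix):
    """Hypothetical attack objective: reward suffixes that prime affirmation.
    A real attack would score the target model's log-prob of an affirmative
    response given prompt + suffix."""
    return sum(tok == "Sure" for tok in suffix)

def greedy_coordinate_search(length=3, iters=5):
    """Iteratively replace each suffix position with the vocabulary token
    that maximizes the objective, holding the other positions fixed."""
    suffix = [VOCAB[0]] * length
    for _ in range(iters):
        for i in range(length):
            suffix[i] = max(VOCAB,
                            key=lambda tok: score(suffix[:i] + [tok] + suffix[i + 1:]))
    return suffix
```

Under this toy objective the search converges to an all-affirmative suffix; the paper's point is that on DLMs such searches succeed more easily because the priming vulnerability makes affirmative intermediate tokens so effective.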