Defense · 2025

A2D: Any-Order, Any-Step Safety Alignment for Diffusion Language Models

Wonje Jeung, Sangyeon Yoon, Yoonjun Cho, Dongjae Jeon, Sangwoo Shin, Hyesoo Hong, Albert No

2 citations · 70 references · arXiv

Published on arXiv: 2509.23286

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Reduces DIJA jailbreak success rate from over 80% to 1.3% on LLaDA-8B-Instruct and 0.0% on Dream-v0-Instruct-7B, with 0% false positives on XSTest benign prompts

A2D (Any-Order, Any-Step Defense)

Novel technique introduced


Diffusion large language models (dLLMs) enable any-order generation, but this flexibility enlarges the attack surface: harmful spans may appear at arbitrary positions, and template-based prefilling attacks such as DIJA bypass response-level refusals. We introduce A2D (Any-Order, Any-Step Defense), a token-level alignment method that aligns dLLMs to emit an [EOS] refusal signal whenever harmful content arises. By aligning safety directly at the token-level under randomized masking, A2D achieves robustness to both any-decoding-order and any-step prefilling attacks under various conditions. It also enables real-time monitoring: dLLMs may begin a response but automatically terminate if unsafe continuation emerges. On safety benchmarks, A2D consistently prevents the generation of harmful outputs, slashing DIJA success rates from over 80% to near-zero (1.3% on LLaDA-8B-Instruct, 0.0% on Dream-v0-Instruct-7B), and thresholded [EOS] probabilities allow early rejection, yielding up to 19.3x faster safe termination.
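The early-rejection mechanism described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the per-step probability format, and the threshold value are all assumptions for demonstration.

```python
# Hypothetical sketch of A2D-style early rejection during diffusion decoding.
# At each denoising step, the model exposes [EOS] probabilities at the still-masked
# positions; if any crosses a threshold, generation terminates early as a refusal.

def decode_with_early_rejection(step_eos_probs, eos_threshold=0.9):
    """Scan per-step [EOS] probabilities (one list of floats per decoding step,
    over masked positions) and stop as soon as the threshold is crossed.

    Returns ("refused", step) on early termination, else ("completed", n_steps).
    """
    for step, probs in enumerate(step_eos_probs):
        if probs and max(probs) >= eos_threshold:
            # Safe termination: refuse before spending the remaining steps.
            return ("refused", step)
    return ("completed", len(step_eos_probs))
```

Because a harmful continuation can be flagged at an early step rather than after full decoding, skipping the remaining denoising steps is what yields the speedup the paper reports (up to 19.3x faster safe termination).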


Key Contributions

  • Introduces A2D, a token-level alignment method that trains dLLMs to emit [EOS] refusal signals at any masked position when harmful content is detected, robust to any decoding order and step
  • Exposes the any-order attack surface of dLLMs and shows safety signals fade rapidly beyond the initial decoding step — analogous to shallow alignment in autoregressive models
  • Introduces the FITS (fill-in-the-sentence) stress-test attack and an early-rejection mechanism using [EOS] probability thresholds, yielding up to 19.3x faster safe termination

🛡️ Threat Analysis


Details

Domains
nlp, generative
Model Types
llm, diffusion
Threat Tags
inference_time, training_time, black_box
Datasets
XSTest, AdvBench
Applications
text generation, safety alignment, diffusion language models