
dgMARK: Decoding-Guided Watermarking for Diffusion Language Models

Pyo Min Hong 1, Albert No 2

0 citations · arXiv


Published on arXiv

2601.22985

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

dgMARK embeds detectable watermarks in dLLM-generated text by exploiting practical order-sensitivity of unmasking without altering token probabilities, achieving robustness against common post-editing attacks via a sliding-window detector.

dgMARK

Novel technique introduced


We propose dgMARK, a decoding-guided watermarking method for discrete diffusion language models (dLLMs). Unlike autoregressive models, dLLMs can generate tokens in arbitrary order. While an ideal conditional predictor would be invariant to this order, practical dLLMs exhibit strong sensitivity to the unmasking order, creating a new channel for watermarking. dgMARK steers the unmasking order toward positions whose high-reward candidate tokens satisfy a simple parity constraint induced by a binary hash, without explicitly reweighting the model's learned probabilities. The method is plug-and-play with common decoding strategies (e.g., confidence, entropy, and margin-based ordering) and can be strengthened with a one-step lookahead variant. Watermarks are detected via elevated parity-matching statistics, and a sliding-window detector ensures robustness under post-editing operations including insertion, deletion, substitution, and paraphrasing.
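The selection rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the SHA-256-based binary hash, the secret key, and the use of model confidence as the position reward are all assumptions made for the example.

```python
import hashlib

def parity(token_id: int, key: bytes = b"wm-key") -> int:
    # Hypothetical binary hash: maps a token id (plus a secret key)
    # to a single parity bit in {0, 1}.
    h = hashlib.sha256(key + token_id.to_bytes(4, "big")).digest()
    return h[0] & 1

def choose_next_position(masked_positions, top_candidates, confidences):
    """Pick the next position to unmask.

    Among the still-masked positions, prefer those whose highest-reward
    (here: most confident) candidate token satisfies the parity
    constraint (parity bit == 1); ties are broken by confidence.
    The token distribution itself is never reweighted -- only the
    unmasking ORDER is steered, which is the channel dgMARK exploits.
    """
    def score(pos):
        tok = top_candidates[pos]          # candidate token id at pos
        return (parity(tok), confidences[pos])
    return max(masked_positions, key=score)
```

In a full decoding loop this selection would run at every diffusion step, so watermarked outputs accumulate an excess of tokens whose hashed parity bit is 1, while each token is still the one the model itself would have proposed.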


Key Contributions

  • Introduces dgMARK, a decoding-guided watermarking scheme for discrete diffusion LLMs that steers the token unmasking order via a parity constraint on high-reward candidates — without reweighting the model's learned probabilities.
  • Provides a sliding-window detector that remains robust to post-editing operations including insertion, deletion, substitution, and paraphrasing.
  • Demonstrates plug-and-play compatibility with common dLLM decoding strategies (confidence, entropy, margin-based) and a one-step lookahead variant for stronger watermarks.
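The sliding-window detector in the second contribution can be sketched as a per-window binomial test. The window size, z-score threshold, and hash function below are illustrative assumptions, not the paper's exact parameters:

```python
import hashlib
import math

def parity(token_id: int, key: bytes = b"wm-key") -> int:
    # Same hypothetical binary hash as used at embedding time.
    h = hashlib.sha256(key + int(token_id).to_bytes(4, "big")).digest()
    return h[0] & 1

def window_zscore(tokens, start, w):
    # Under the null (unwatermarked text), each parity bit is
    # ~Bernoulli(1/2), so the match count k in a window of size w
    # has mean w/2 and variance w/4.
    k = sum(parity(t) for t in tokens[start:start + w])
    return (k - w / 2) / math.sqrt(w / 4)

def detect(tokens, w=32, z_thresh=4.0):
    """Flag a watermark if ANY window shows a parity-match count
    significantly above chance. Localized post-edits (insertion,
    deletion, substitution) corrupt only the windows they touch,
    so intact windows elsewhere still trigger detection."""
    if len(tokens) < w:
        return False
    return any(window_zscore(tokens, s, w) >= z_thresh
               for s in range(len(tokens) - w + 1))
```

Scanning all windows rather than scoring the whole sequence is what buys robustness: a paraphrased or edited span lowers a few windows' statistics without diluting the rest.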

🛡️ Threat Analysis

Output Integrity Attack

dgMARK embeds watermarks in the text output of diffusion language models (not in the model weights) by biasing the decoding order via a binary-hash parity constraint, enabling provenance detection of AI-generated text. This is content watermarking for output integrity and traceability, the canonical ML09 use case.


Details

Domains
nlp, generative
Model Types
diffusion, llm, transformer
Threat Tags
inference_time
Applications
ai-generated text detection, text provenance, content attribution, discrete diffusion language model outputs