Distilling the Thought, Watermarking the Answer: A Principle Semantic Guided Watermark for Large Reasoning Models
Shuliang Liu 1,2, Xingyu Li 1, Hongyi Liu 1, Yibo Yan 1,2, Bingchen Duan 1,2, Qi Zheng 1,2, Dong Fang 3, Lingfeng Su 3, Xuming Hu 1,2
1 The Hong Kong University of Science and Technology (Guangzhou)
Published on arXiv
2601.05144
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
ReasonMark achieves an AUC of 99.52 for watermark detection while reducing perplexity by 0.35 and raising math accuracy by 0.67 points relative to state-of-the-art watermarking methods.
ReasonMark
Novel technique introduced
Reasoning Large Language Models (RLLMs) excel at complex tasks but pose unique challenges for digital watermarking, as existing methods often disrupt logical coherence or incur high computational costs. Token-based watermarking can corrupt the reasoning flow by applying pseudo-random biases, while semantic-aware approaches improve quality but introduce significant latency or require auxiliary models. This paper introduces ReasonMark, a watermarking framework designed specifically for reasoning-intensive LLMs. Our approach decouples generation into an undisturbed Thinking Phase and a watermarked Answering Phase. We propose a Criticality Score to identify semantically pivotal tokens in the reasoning trace, which are distilled into a Principal Semantic Vector (PSV). The PSV then guides a semantically adaptive mechanism that modulates watermark strength according to token-PSV alignment, ensuring robustness without compromising logical integrity. Extensive experiments show that ReasonMark surpasses state-of-the-art methods, reducing text perplexity by 0.35, increasing translation BLEU score by 0.164, and raising mathematical accuracy by 0.67 points, while achieving a 0.34% higher watermark detection AUC, stronger robustness to attacks, and a negligible increase in latency. This work enables the traceable and trustworthy deployment of reasoning LLMs in real-world applications.
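The PSV distillation and alignment-modulated bias described in the abstract can be sketched in NumPy. This is a minimal illustration: the criticality scores are taken as given, and the top-k averaging rule and the exact strength schedule are assumptions, not the paper's formulas.

```python
import numpy as np

def principal_semantic_vector(trace_embeddings, criticality, k=8):
    """Distill a unit-norm PSV by averaging the embeddings of the k tokens
    with the highest Criticality Score (averaging rule is an assumption)."""
    top = np.argsort(criticality)[-k:]
    psv = trace_embeddings[top].mean(axis=0)
    return psv / np.linalg.norm(psv)

def watermark_bias(token_embedding, psv, base_delta=2.0):
    """Modulate the watermark logit bias by token-PSV alignment; here the
    bias shrinks for semantically pivotal (PSV-aligned) tokens so the
    answer's meaning is disturbed least -- the paper's schedule may differ."""
    v = token_embedding / np.linalg.norm(token_embedding)
    align = max(float(v @ psv), 0.0)   # clipped cosine similarity
    return base_delta * (1.0 - align)  # in [0, base_delta]
```

A decoder would add `watermark_bias(...)` to green-listed token logits during the Answering Phase only, leaving the Thinking Phase untouched.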
Key Contributions
- Decouples LLM generation into an undisturbed Thinking Phase and a watermarked Answering Phase, preserving reasoning coherence while enabling traceability
- Introduces a Criticality Score that identifies semantically pivotal tokens in the reasoning trace, distilled into a Principal Semantic Vector (PSV) that guides adaptive watermark strength
- Achieves superior text quality (perplexity reduced by 0.35, BLEU +0.164, math accuracy +0.67 points) and stronger detection (AUC 99.52) with negligible latency overhead compared to state-of-the-art baselines
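The thinking/answering split in the first contribution can be illustrated with a toy two-phase sampler. The 4-token vocabulary, per-step green-list scheme, and bias value are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

def toy_generate(n_tokens, delta=0.0, seed=1):
    """Greedy sampling over a toy 4-token vocabulary; a per-step
    pseudo-random 'green' half of the vocabulary receives a logit bias
    of delta (hypothetical watermarking scheme)."""
    rng = np.random.default_rng(seed)
    tokens, greens = [], []
    for _ in range(n_tokens):
        logits = rng.normal(size=4)
        green = rng.permutation(4)[:2].tolist()
        logits[green] += delta
        tokens.append(int(np.argmax(logits)))
        greens.append(set(green))
    return tokens, greens

def green_rate(tokens, greens):
    """Fraction of emitted tokens that landed in their step's green list."""
    return sum(t in g for t, g in zip(tokens, greens)) / len(tokens)

# Thinking Phase: delta = 0, the reasoning trace is sampled undisturbed.
trace, trace_greens = toy_generate(200, delta=0.0, seed=1)
# Answering Phase: delta > 0, tokens are nudged toward the green list.
answer, answer_greens = toy_generate(200, delta=4.0, seed=2)
```

With the bias active only in the second call, `green_rate(answer, answer_greens)` sits far above the ~0.5 chance level of the unwatermarked trace, which is what a detector later exploits.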
🛡️ Threat Analysis
ReasonMark embeds watermarks in LLM-generated answer text (not model weights) to trace content provenance and verify output authenticity — classic output integrity / content watermarking. The paper also evaluates robustness to watermark removal attacks, reinforcing the ML09 framing.
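Detection of such green-list watermarks is typically a one-sample z-test on the green-token count (the standard KGW-style statistic; ReasonMark reports AUC over per-text scores, and its exact detector is an assumption here).

```python
import math

def green_list_z(green_count, total, gamma=0.5):
    """z-score of observing green_count green tokens out of total, under
    the null hypothesis that unwatermarked text is green with rate gamma."""
    expected = gamma * total
    std = math.sqrt(total * gamma * (1.0 - gamma))
    return (green_count - expected) / std
```

A removal attack (paraphrasing, token substitution) lowers `green_count` and hence the z-score, which is why robustness is evaluated as detection AUC under attack.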