On the Empirical Power of Goodness-of-Fit Tests in Watermark Detection
Weiqing He, Xiang Li, Tianqi Shang, Li Shen, Weijie Su, Qi Long
Published on arXiv (arXiv:2510.03944)
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Goodness-of-fit tests consistently improve watermark detection power and robustness over existing detectors, with the largest gains in low-temperature settings, where they exploit token-repetition patterns that existing methods ignore.
Large language models (LLMs) raise concerns about content authenticity and integrity because they can generate human-like text at scale. Text watermarks, which embed detectable statistical signals into generated text, offer a provable way to verify content origin. Many detection methods rely on pivotal statistics that are i.i.d. under human-written text, making goodness-of-fit (GoF) tests a natural tool for watermark detection. However, GoF tests remain largely underexplored in this setting. In this paper, we systematically evaluate eight GoF tests across three popular watermarking schemes, using three open-source LLMs, two datasets, various generation temperatures, and multiple post-editing methods. We find that general GoF tests can improve both the detection power and robustness of watermark detectors. Notably, we observe that text repetition, common in low-temperature settings, gives GoF tests a unique advantage not exploited by existing methods. Our results highlight that classic GoF tests are a simple yet powerful and underused tool for watermark detection in LLMs.
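The pivotal-statistic framing suggests why classical GoF tests slot in directly: under several schemes, each generated token yields a pivot that is i.i.d. Uniform(0,1) for human-written text, so any one-sample GoF test against U(0,1) becomes a watermark detector. Below is a minimal sketch under that assumption; the function name and the toy "watermarked" distribution are illustrative, not the paper's code.

```python
import random

def ks_statistic_uniform(pivots):
    """One-sample Kolmogorov-Smirnov statistic against Uniform(0,1).

    Under the null (human-written text), pivotal statistics are assumed
    i.i.d. Uniform(0,1); a large D is evidence of a watermark.
    """
    xs = sorted(pivots)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # The empirical CDF jumps from i/n to (i+1)/n at x; U(0,1) CDF is x.
        d = max(d, (i + 1) / n - x, x - i / n)
    return d

random.seed(0)
# Null: i.i.d. uniform pivots, as expected for human-written text.
null_pivots = [random.random() for _ in range(500)]
# Toy "watermarked" pivots skewed toward 1 (CDF x^2 instead of x).
wm_pivots = [max(random.random(), random.random()) for _ in range(500)]

# The watermarked sample deviates far more from U(0,1).
print(ks_statistic_uniform(wm_pivots) > ks_statistic_uniform(null_pivots))
```

The paper evaluates eight such tests; Kolmogorov-Smirnov is simply the most familiar instance of the family.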
Key Contributions
- Systematic empirical evaluation of eight classical goodness-of-fit tests across three watermarking schemes (KGW, Aaronson, Unigram) using three open-source LLMs
- Discovery that text repetition in low-temperature generation gives GoF tests a detection advantage not exploited by existing watermark detectors
- Demonstration that GoF tests improve both detection power and robustness against post-editing attacks compared to standard detection methods
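The repetition finding can be sketched with toy data (illustrative only, not the paper's code): when low-temperature decoding re-emits the same phrases, the same context reproduces the same pivotal value, so the pivot sample contains heavy ties. A mean- or sum-based detector barely notices, but any test sensitive to the empirical distribution does.

```python
import random

random.seed(1)

# Low-temperature decoding: only 100 distinct contexts, each phrase
# (and hence each pivotal value) emitted five times.
base = [random.random() for _ in range(100)]
repeated = base * 5                          # 500 pivots, 100 distinct
iid = [random.random() for _ in range(500)]  # human-like i.i.d. pivots

# A mean-based detector sees almost nothing: both means are near 0.5.
mean_gap = abs(sum(repeated) / 500 - sum(iid) / 500)

# A distribution-sensitive (GoF-style) view flags massive ties at once.
ties_repeated = 500 - len(set(repeated))     # 400 duplicated values
ties_iid = 500 - len(set(iid))               # almost surely 0
print(mean_gap, ties_repeated, ties_iid)
```

Roughly, the duplicated pivots make the empirical CDF behave like a much smaller sample, inflating its deviation from the null distribution, which is exactly the quantity GoF statistics measure.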
🛡️ Threat Analysis
Focuses on verifying and detecting statistical watermarks embedded in LLM text outputs to authenticate content provenance — directly addresses output integrity and AI-generated content detection.