Improving Detection of Watermarked Language Models
Published on arXiv (2508.13131)
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Combining watermark scores with a RoBERTa-based AI-generated-content (AGC) classifier via logistic regression boosts detection accuracy from 75% to over 95% on the lowest-entropy 20% of prompts.
Hybrid watermark detection
Novel technique introduced
Watermarking has recently emerged as an effective strategy for detecting the generations of large language models (LLMs). The strength of a watermark typically depends strongly on the entropy afforded by the language model and the set of input prompts. However, entropy can be quite limited in practice, especially for models that are post-trained, for example via instruction tuning or reinforcement learning from human feedback (RLHF), which makes detection based on watermarking alone challenging. In this work, we investigate whether detection can be improved by combining watermark detectors with non-watermark ones. We explore a number of hybrid schemes that combine the two, observing performance gains over either class of detector under a wide range of experimental conditions.
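The combination idea can be illustrated with a minimal sketch: train a logistic-regression combiner on two per-text features, a watermark statistic and a classifier probability. All names and the synthetic score distributions below are illustrative assumptions, not the paper's actual features or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic per-text features (illustrative, not the paper's data):
# column 0 = watermark z-score, column 1 = non-watermark classifier probability.
rng = np.random.default_rng(0)
n = 200
human = np.column_stack([rng.normal(0.0, 1.0, n), rng.beta(2, 5, n)])
machine = np.column_stack([rng.normal(2.0, 1.0, n), rng.beta(5, 2, n)])
X = np.vstack([human, machine])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = human, 1 = machine

# Logistic regression fuses the two detector scores into one decision score.
combiner = LogisticRegression().fit(X, y)
hybrid_score = combiner.predict_proba(X)[:, 1]  # in [0, 1]
```

In low-entropy settings the watermark z-score alone separates the classes poorly, and the learned weights let the classifier feature pick up the slack, which is the intuition behind the reported gains.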
Key Contributions
- Proposes hybrid detection schemes that combine watermark-based detectors with non-watermark classifiers (e.g., RoBERTa-based AGC detectors) for improved LLM-generated text detection
- Demonstrates that hybrid approaches substantially improve detection in low-entropy regimes, boosting accuracy from 75% to over 95% on the lowest-entropy 20% of prompts
- Analyzes the role of entropy in both watermark and non-watermark detection and provides practical deployment recommendations
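The watermark side of such a hybrid is typically a statistical test. A minimal sketch of a standard green-list z-test in the style of Kirchenbauer et al. follows; the hashing scheme and function names are illustrative assumptions, not the paper's implementation.

```python
import hashlib
import math

def is_green(prev_token: int, token: int, gamma: float = 0.5) -> bool:
    # Pseudorandomly assign each token to the "green" list, seeded by the
    # previous token, so roughly a gamma fraction of the vocabulary is green.
    h = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return (h[0] / 255.0) < gamma

def watermark_z_score(tokens: list[int], gamma: float = 0.5) -> float:
    # One-proportion z-test: compare the observed green-token count against
    # the gamma fraction expected in unwatermarked text.
    t = len(tokens) - 1  # number of (prev, current) pairs scored
    greens = sum(is_green(p, c, gamma) for p, c in zip(tokens, tokens[1:]))
    return (greens - gamma * t) / math.sqrt(t * gamma * (1 - gamma))
```

The test's power grows with the entropy of the generation: when a post-trained model's output is near-deterministic, few tokens can be biased toward the green list, the z-score stays small, and a hybrid with a non-watermark classifier becomes attractive.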
🛡️ Threat Analysis
The core contribution is improving detection of LLM-generated content (AI-generated text detection) by combining watermark detectors with non-watermark classifiers. The watermarks here are embedded in model outputs (text) for provenance and authenticity purposes, not in model weights for ownership, which places this squarely under ML09 (output integrity).