
WaterSearch: A Quality-Aware Search-based Watermarking Framework for Large Language Models

Yukang Lin 1,2, Jiahao Shao 1,2, Shuoran Jiang 1,2, Wentao Zhu 1,2, Bingjie Lu 1,2, Xiangping Wu 1,2, Joanna Siebert 1,2, Qingcai Chen 1,2



Published on arXiv · 2512.00837

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Achieves 51.01% average quality improvement over state-of-the-art baselines at 95% watermark detectability strength, with gains of 47.78% and 36.47% in short-text and low-entropy scenarios respectively.

WaterSearch

Novel technique introduced


Watermarking acts as a critical safeguard for text generated by Large Language Models (LLMs). By embedding identifiable signals into model outputs, watermarking enables reliable attribution and enhances the security of machine-generated content. Existing approaches typically embed signals by manipulating token generation probabilities. Despite their effectiveness, these methods face an inherent trade-off between detectability and text quality: the signal strength and randomness required for robust watermarking tend to degrade performance on downstream tasks. In this paper, we design a novel embedding scheme that controls seed pools to facilitate diverse parallel generation of watermarked text. Based on that scheme, we propose WaterSearch, a sentence-level, search-based watermarking framework adaptable to a wide range of existing methods. WaterSearch enhances text quality by jointly optimizing two key aspects: 1) distribution fidelity and 2) watermark signal characteristics. Furthermore, WaterSearch is complemented by a sentence-level detection method with strong attack robustness. We evaluate our method on three popular LLMs across ten diverse tasks. Extensive experiments demonstrate that our method achieves an average performance improvement of 51.01% over state-of-the-art baselines at a watermark detectability strength of 95%. In challenging scenarios such as short-text generation and low-entropy output generation, our method yields performance gains of 47.78% and 36.47%, respectively. Moreover, under different attack scenarios, including insertion, synonym substitution, and paraphrase attacks, WaterSearch maintains high detectability, further validating its robust anti-attack capabilities. Our code is available at https://github.com/Yukang-Lin/WaterSearch.
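The core idea of the abstract — generate several candidate sentences under different watermark seeds, then search for the one that best balances text quality against watermark signal strength — can be sketched as follows. This is an illustrative toy, not the paper's actual algorithm: `green_fraction`, the whitespace tokenization, the hash-based green list, and the linear scoring weight `alpha` are all assumptions made for the sketch (real schemes like KGW hash vocabulary ids from the model's tokenizer and use the model's log-probabilities as the quality signal).

```python
import hashlib

def green_fraction(text: str, seed: int, gamma: float = 0.5) -> float:
    """Fraction of tokens whose seed-keyed hash falls in the 'green' list.
    Simplified whitespace tokenization for illustration only."""
    tokens = text.split()
    if not tokens:
        return 0.0
    green = 0
    for tok in tokens:
        h = int(hashlib.sha256(f"{seed}:{tok}".encode()).hexdigest(), 16)
        if (h % 1000) / 1000.0 < gamma:  # token lands in the green list
            green += 1
    return green / len(tokens)

def select_candidate(candidates, seed_pool, alpha: float = 0.5):
    """Search over (candidate, seed) pairs and return the pair whose
    combined quality + watermark-signal score is highest.
    `candidates` is a list of (text, quality) pairs, where quality is a
    stand-in for a model log-probability or fidelity score."""
    best = None
    for text, quality in candidates:
        for seed in seed_pool:
            signal = green_fraction(text, seed)
            score = alpha * quality + (1 - alpha) * signal
            if best is None or score > best[0]:
                best = (score, text, seed)
    return best[1], best[2]
```

A caller would generate one candidate per seed in the pool, score each, and emit the winner, so the watermark is embedded without forcing every token through a biased distribution.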


Key Contributions

  • Novel seed-pool-controlled embedding scheme enabling diverse parallel generation of watermarked text candidates
  • WaterSearch framework that jointly optimizes distribution fidelity and watermark signal characteristics via sentence-level search
  • Sentence-level detection method with robustness to insertion, synonym substitution, and paraphrase attacks
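The third contribution, sentence-level detection, can be illustrated with a standard z-score test: under the null hypothesis (unwatermarked text) each token is green with probability gamma, so a green-token count far above that expectation signals a watermark. This is a hedged sketch, not WaterSearch's actual detector; the whitespace tokenization, the hash, and the averaging-over-sentences rule are assumptions, but per-sentence scoring does illustrate why local edits (insertion, substitution) only perturb one sentence's score.

```python
import hashlib
import math

def is_green(token: str, seed: int, gamma: float = 0.5) -> bool:
    """Seed-keyed hash test: does this token fall in the green list?"""
    h = int(hashlib.sha256(f"{seed}:{token}".encode()).hexdigest(), 16)
    return (h % 1000) / 1000.0 < gamma

def sentence_z_score(sentence: str, seed: int, gamma: float = 0.5) -> float:
    """z-score of the green-token count under the no-watermark null,
    where each token is green independently with probability gamma."""
    tokens = sentence.split()
    n = len(tokens)
    if n == 0:
        return 0.0
    g = sum(is_green(t, seed) for t in tokens)
    return (g - gamma * n) / math.sqrt(gamma * (1 - gamma) * n)

def detect(sentences, seed_pool, threshold: float = 2.0):
    """Score each sentence against its best-matching seed in the pool,
    then flag the text if the average z-score exceeds the threshold.
    Scoring sentences independently keeps one paraphrased or inserted
    sentence from erasing the signal carried by the others."""
    scores = [max(sentence_z_score(s, seed) for seed in seed_pool)
              for s in sentences]
    avg = sum(scores) / len(scores)
    return avg > threshold, avg
```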

🛡️ Threat Analysis

Output Integrity Attack

Embeds watermark signals in LLM-generated text outputs to enable provenance attribution and AI-generated content detection — classic output integrity / content watermarking. Watermark is in the generated text, not the model weights.


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time
Datasets
MMLU; ten diverse NLP task benchmarks across three LLMs
Applications
llm text generation; ai-generated content attribution; content provenance tracking