Beyond A Fixed Seal: Adaptive Stealing Watermark in Large Language Models

Shuhao Zhang 1, Yuli Chen 1, Jiale Han 2, Bo Cheng 1, Jiabao Ma 1

Published on arXiv

arXiv:2604.10893

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Reduces watermark detection AUC below 0.55 for three different watermarking schemes using 10,000 watermarked text samples
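For intuition on why AUC below 0.55 amounts to a defeated detector: an AUC of 0.5 means the detector ranks watermarked text above clean text no better than a coin flip. The sketch below (illustrative only; the scores, distributions, and `auc` helper are assumptions, not the paper's setup) computes AUC as the pairwise win rate of detector scores and shows that barely separated score distributions land near 0.5.

```python
import random

def auc(pos_scores, neg_scores):
    """AUC as the probability that a random positive (watermarked) sample
    scores higher than a random negative (clean) one; ties count half."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

random.seed(0)
# Hypothetical detector scores: after a successful scrubbing attack, the
# watermarked-text score distribution nearly collapses onto the clean one.
clean    = [random.gauss(0.0, 1.0) for _ in range(1000)]
scrubbed = [random.gauss(0.1, 1.0) for _ in range(1000)]  # barely separated

print(auc(scrubbed, clean))  # close to 0.5, i.e. near-chance detection
```

With only a 0.1 standard-deviation shift between the two score distributions, the detector is effectively guessing, which is the regime the reported sub-0.55 AUC indicates.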

Adaptive Stealing (AS)

Novel technique introduced


Watermarking provides a critical safeguard for large language model (LLM) services by facilitating the detection of LLM-generated text. Correspondingly, stealing watermark algorithms (SWAs) derive watermark information from watermarked texts generated by victim LLMs to craft highly targeted adversarial attacks, which compromise the reliability of watermarks. Existing SWAs rely on fixed strategies, overlooking the non-uniform distribution of stolen watermark information and the dynamic nature of real-world LLM generation processes. To address these limitations, we propose Adaptive Stealing (AS), a novel SWA featuring enhanced design flexibility through Position-Based Seal Construction and Adaptive Selection modules. AS operates by defining multiple attack perspectives derived from distinct activation states of contextually ordered tokens. During attack execution, AS dynamically selects the optimal perspective based on watermark compatibility, generation priority, and dynamic generation relevance. Our experiments demonstrate that AS significantly increases stealing efficiency against target watermarks under identical experimental conditions. These findings highlight the need for more robust LLM watermarks to withstand potential attacks. We release our code to the community for future research: https://github.com/DrankXs/AdaptiveStealingWatermark


Key Contributions

  • Position-Based Seal Construction module that generates diverse attack perspectives from token position activation patterns
  • Adaptive Selection module that dynamically chooses optimal seal based on watermark compatibility, generation priority, and context relevance
  • Demonstrates near-complete scrubbing of three watermark schemes (detection AUC < 0.55) using only 10,000 query samples
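The Adaptive Selection idea described above (choosing an attack perspective by watermark compatibility, generation priority, and context relevance) might be sketched as a weighted scoring rule. Everything below is a hypothetical illustration: the `Seal` fields, the weights, and the linear combination are assumptions, not the paper's actual module.

```python
from dataclasses import dataclass

@dataclass
class Seal:
    """One candidate attack perspective built from token-position activations."""
    name: str
    compatibility: float  # agreement with the stolen watermark statistics
    priority: float       # generation priority of the underlying positions
    relevance: float      # relevance to the current generation context

def select_seal(seals, w_compat=0.5, w_prio=0.3, w_rel=0.2):
    """Pick the highest-scoring seal under a (hypothetical) weighted sum."""
    def score(s):
        return w_compat * s.compatibility + w_prio * s.priority + w_rel * s.relevance
    return max(seals, key=score)

candidates = [
    Seal("position-0", compatibility=0.9, priority=0.4, relevance=0.3),
    Seal("position-1", compatibility=0.6, priority=0.9, relevance=0.8),
    Seal("position-2", compatibility=0.3, priority=0.5, relevance=0.9),
]
print(select_seal(candidates).name)  # the seal with the best trade-off
```

The point of the sketch is the selection being dynamic: re-running `select_seal` at each generation step with updated relevance scores would let the attacker switch perspectives as the context evolves, in contrast to the fixed strategies of prior SWAs.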

🛡️ Threat Analysis

Output Integrity Attack

Attacks LLM text watermarking schemes by stealing watermark patterns from victim texts, enabling the adversary to forge watermarks or scrub detection signals. This makes it an output integrity attack targeting content watermarks.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Applications
llm text watermark removal, watermark forgery, content attribution evasion