survey 2025

Position: LLM Watermarking Should Align Stakeholders' Incentives for Practical Adoption

Yepeng Liu 1, Xuandong Zhao 2, Dawn Song 3, Gregory W. Wornell 2, Yuheng Bu 1

2 citations · 107 references · arXiv

α

Published on arXiv

2510.18333

Output Integrity Attack

OWASP ML Top 10 — ML09

Model Theft

OWASP ML Top 10 — ML05

Key Finding

Incentive misalignment — not algorithmic limitations — is identified as the primary reason LLM watermarking lacks adoption, with ICW proposed as a rare incentive-aligned solution for trusted-party misuse detection.

In-Context Watermarking (ICW)

Novel technique introduced


Despite progress in watermarking algorithms for large language models (LLMs), real-world deployment remains limited. We argue that this gap stems from misaligned incentives among LLM providers, platforms, and end users, which manifest as four key barriers: competitive risk, detection-tool governance, robustness concerns and attribution issues. We revisit three classes of watermarking through this lens. \emph{Model watermarking} naturally aligns with LLM provider interests, yet faces new challenges in open-source ecosystems. \emph{LLM text watermarking} offers modest provider benefit when framed solely as an anti-misuse tool, but can gain traction in narrowly scoped settings such as dataset de-contamination or user-controlled provenance. \emph{In-context watermarking} (ICW) is tailored for trusted parties, such as conference organizers or educators, who embed hidden watermarking instructions into documents. If a dishonest reviewer or student submits this text to an LLM, the output carries a detectable watermark indicating misuse. This setup aligns incentives: users experience no quality loss, trusted parties gain a detection tool, and LLM providers remain neutral by simply following watermark instructions. We advocate for a broader exploration of incentive-aligned methods, with ICW as an example, in domains where trusted parties need reliable tools to detect misuse. More broadly, we distill design principles for incentive-aligned, domain-specific watermarking and outline future research directions. Our position is that the practical adoption of LLM watermarking requires aligning stakeholder incentives in targeted application domains and fostering active community engagement.


Key Contributions

  • Identifies misaligned stakeholder incentives (competitive risk, detection-tool governance, robustness, attribution) as the primary barrier to real-world LLM watermarking adoption
  • Analyzes three watermarking classes (model watermarking, LLM text watermarking, in-context watermarking) through an incentive-alignment lens
  • Proposes in-context watermarking (ICW) as an incentive-aligned paradigm for trusted parties (e.g., educators, conference organizers) to detect LLM misuse without imposing costs on honest users

🛡️ Threat Analysis

Model Theft

Explicitly covers model watermarking as a class of techniques to protect LLM intellectual property and trace unauthorized model use — a core ML05 application.

Output Integrity Attack

Primarily analyzes LLM text watermarking and proposes in-context watermarking (ICW) for embedding detectable provenance signals in LLM-generated content — directly targeting output integrity and AI-generated content attribution.


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
inference_time
Applications
llm content provenanceacademic integrity enforcementdataset de-contaminationip protection