Ideal Attribution and Faithful Watermarks for Language Models

Min Jae Song, Kameron Shahabi

0 citations · 26 references · arXiv

Published on arXiv · 2512.07038

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

A unified formal framework clarifying which attribution guarantees are achievable in idealized settings, serving as ground truth for evaluating and designing LLM watermarking schemes.

Ideal Attribution Mechanisms / Ledger

Novel technique introduced


We introduce ideal attribution mechanisms, a formal abstraction for reasoning about attribution decisions over strings. At the core of this abstraction lies the ledger, an append-only log of the prompt-response interaction history between a model and its user. Each mechanism produces deterministic decisions based on the ledger and an explicit selection criterion, making it well-suited to serve as a ground truth for attribution. We frame the design goal of watermarking schemes as faithful representation of ideal attribution mechanisms. This novel perspective brings conceptual clarity, replacing piecemeal probabilistic statements with a unified language for stating the guarantees of each scheme. It also enables precise reasoning about desiderata for future watermarking schemes, even when no current construction achieves them, since the ideal functionalities are specified first. In this way, the framework provides a roadmap that clarifies which guarantees are attainable in an idealized setting and worth pursuing in practice.
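To make the abstraction concrete, here is a minimal sketch of a ledger and an ideal attribution mechanism in Python, assuming a toy "earliest exact response match" selection criterion. All class, field, and method names are illustrative, not the paper's formal notation.

```python
# Minimal sketch of the ledger abstraction. The exact-match selection
# criterion below is an assumption for illustration only.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entry:
    """One prompt-response interaction between a model and a user."""
    model_id: str
    user_id: str
    prompt: str
    response: str

@dataclass
class Ledger:
    """Append-only log of the interaction history."""
    _log: list = field(default_factory=list)

    def append(self, entry: Entry) -> None:
        # Entries are only ever appended, never edited or deleted.
        self._log.append(entry)

    def history(self):
        return tuple(self._log)  # read-only view of the log

class IdealAttribution:
    """Deterministic attribution decisions over strings, fixed entirely
    by the ledger contents plus an explicit selection criterion."""

    def __init__(self, ledger: Ledger):
        self.ledger = ledger

    def attribute(self, text: str):
        # Selection criterion (assumed here): the earliest ledger entry
        # whose response exactly equals the queried string wins.
        for e in self.ledger.history():
            if e.response == text:
                return (e.model_id, e.user_id)
        return None  # not attributed to any recorded interaction
```

Because the decision is a pure function of the ledger and the stated criterion, any two parties holding the same ledger reach the same attribution, which is what lets the mechanism serve as ground truth.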


Key Contributions

  • Introduces 'ideal attribution mechanisms' — a formal abstraction defining deterministic attribution decisions over strings via an append-only ledger of prompt–response histories
  • Reframes the design goal of watermarking schemes as 'faithful representation' of these ideal mechanisms, replacing ad hoc probabilistic statements with a unified guarantee language
  • Provides a formal roadmap of attainable guarantees in idealized settings, enabling precise reasoning about desiderata for future watermarking constructions

🛡️ Threat Analysis

Output Integrity Attack

The paper is fundamentally about watermarking LLM text outputs for attribution and content provenance — core ML09 territory. The 'ideal attribution mechanism' and 'ledger' formalism provide a theoretical foundation for specifying and evaluating watermarking schemes that track which model/user produced a given string.
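To illustrate the ground-truth role, the sketch below scores a watermark detector against the ideal mechanism defined above. The boolean detector interface and the agreement metric are assumptions for illustration; the paper's notion of faithful representation is a formal guarantee, not an empirical score.

```python
# Toy "faithfulness" comparison against the IdealAttribution sketch above.
def agreement_rate(detect, ideal, samples):
    """Fraction of strings on which the detector's yes/no decision
    matches the ideal mechanism's attributed/not-attributed decision.

    detect:  callable str -> bool (watermark detector, assumed interface)
    ideal:   IdealAttribution instance from the sketch above
    samples: list of strings to test
    """
    hits = 0
    for text in samples:
        scheme_says = detect(text)                       # watermark present?
        ledger_says = ideal.attribute(text) is not None  # in the history?
        hits += (scheme_says == ledger_says)
    return hits / len(samples)
```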


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time
Applications
llm text attribution · content provenance · watermarking