defense 2026

Unforgeable Watermarks for Language Models via Robust Signatures

Huijia Lin¹, Kameron Shahabi¹, Min Jae Song²



Published on arXiv: 2602.15323

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

First provably unforgeable and recoverable LLM text watermarking scheme, constructed by boosting standard digital signatures with property-preserving hash functions; it prevents false attribution while enabling fine-grained source traceability.

Robust Digital Signatures for Watermarking

Novel technique introduced


Language models now routinely produce text that is difficult to distinguish from human writing, raising the need for robust tools to verify content provenance. Watermarking has emerged as a promising countermeasure, with existing work largely focused on model quality preservation and robust detection. However, current schemes provide limited protection against false attribution. We strengthen the notion of soundness by introducing two novel guarantees: unforgeability and recoverability. Unforgeability prevents adversaries from crafting false positives, texts that are far from any output from the watermarked model but are nonetheless flagged as watermarked. Recoverability provides an additional layer of protection: whenever a watermark is detected, the detector identifies the source text from which the flagged content was derived. Together, these properties strengthen content ownership by linking content exclusively to its generating model, enabling secure attribution and fine-grained traceability. We construct the first undetectable watermarking scheme that is robust, unforgeable, and recoverable with respect to substitutions (i.e., perturbations in Hamming metric). The key technical ingredient is a new cryptographic primitive called robust (or recoverable) digital signatures, which allow verification of messages that are close to signed ones, while preventing forgery of messages that are far from all previously signed messages. We show that any standard digital signature scheme can be boosted to a robust one using property-preserving hash functions (Boyle, LaVigne, and Vaikuntanathan, ITCS 2019).
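The idea of "verifying messages close to signed ones while rejecting messages far from all signed ones" can be illustrated with a toy sketch. Everything below is an assumption-laden simplification, not the paper's construction: HMAC with a shared key stands in for a public-key digital signature, and a plain list of per-block digests stands in for the compact property-preserving hash for Hamming distance due to Boyle, LaVigne, and Vaikuntanathan; the block size, threshold, and key are hypothetical parameters chosen for illustration.

```python
import hashlib
import hmac

KEY = b"demo-signing-key"   # hypothetical key; a real scheme uses public-key signatures
BLOCK = 4                   # tokens per block (illustrative)
THRESHOLD = 1               # max differing blocks tolerated (Hamming robustness radius)

def _blocks(tokens):
    return [tuple(tokens[i:i + BLOCK]) for i in range(0, len(tokens), BLOCK)]

def _digest(block):
    return hashlib.sha256(repr(block).encode()).hexdigest()

def sign(tokens):
    """Sign per-block digests; the MAC binds the entire digest list."""
    digests = [_digest(b) for b in _blocks(tokens)]
    tag = hmac.new(KEY, "".join(digests).encode(), hashlib.sha256).hexdigest()
    return digests, tag

def verify(tokens, signature):
    """Accept iff the digest list is authentic and at most THRESHOLD
    blocks of `tokens` disagree with the signed digests."""
    digests, tag = signature
    expect = hmac.new(KEY, "".join(digests).encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expect, tag):
        return False
    got = [_digest(b) for b in _blocks(tokens)]
    if len(got) != len(digests):
        return False
    mismatches = sum(g != d for g, d in zip(got, digests))
    return mismatches <= THRESHOLD

model_output = "the quick brown fox jumps over the lazy dog near the river bank".split()
sig = sign(model_output)

near = list(model_output)
near[2] = "red"                            # one substitution: still within tolerance
far = ["completely"] * len(model_output)   # far from the signed text in Hamming metric

print(verify(model_output, sig))  # True
print(verify(near, sig))          # True: robust to a small substitution
print(verify(far, sig))           # False: far texts are rejected (toy unforgeability)
```

Note the trade-off the paper formalizes: the verifier tolerates texts within the robustness radius but must reject anything farther than the forgery radius from every signed message; in this toy version the shipped digest list is large, whereas the PPH-based construction keeps the signature compact.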


Key Contributions

  • Introduces unforgeability guarantee: adversaries cannot craft text far from any model output that is falsely flagged as watermarked
  • Introduces recoverability guarantee: whenever a watermark is detected, the detector identifies the source text it was derived from
  • Constructs the first undetectable watermarking scheme that is simultaneously robust, unforgeable, and recoverable using a new cryptographic primitive — robust digital signatures — built from property-preserving hash functions
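The unforgeability guarantee above can be sketched informally (a paraphrase of the abstract, not the paper's exact definition): for Hamming distance $d$, the set $Q$ of texts output by the watermarked model, forgery radius $t$, and any efficient adversary producing a candidate text $m^*$,

```latex
\Pr\Big[\mathsf{Detect}(k, m^*) = 1 \;\wedge\; \min_{m \in Q} d(m^*, m) > t\Big] \le \mathrm{negl}(\lambda),
```

i.e., texts far from every model output are flagged only with negligible probability in the security parameter $\lambda$.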

🛡️ Threat Analysis

Output Integrity Attack

Proposes a watermarking scheme embedded in LLM text outputs to verify content provenance and enable secure attribution. This falls under output integrity and content authenticity, with a specific focus on preventing adversarial false positives (unforgeability), where an adversary crafts text that falsely triggers watermark detection.


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Applications
llm text provenance, content attribution, ai-generated text detection