
DITTO: A Spoofing Attack Framework on Watermarked LLMs via Knowledge Distillation

Hyeseon An 1, Shinwoo Park 1, Suyeon Woo 2, Yo-Sub Han 1



Published on arXiv (2510.10987)

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Demonstrates that an attacker can forge the watermark signal of a trusted victim LLM under a black-box setting using knowledge distillation, enabling disinformation to be falsely attributed to reputable sources.

DITTO

Novel technique introduced


The promise of LLM watermarking rests on a core assumption that a specific watermark proves authorship by a specific model. We demonstrate that this assumption is dangerously flawed. We introduce the threat of watermark spoofing, a sophisticated attack that allows a malicious model to generate text containing the authentic-looking watermark of a trusted, victim model. This enables the seamless misattribution of harmful content, such as disinformation, to reputable sources. The key to our attack is repurposing watermark radioactivity, the unintended inheritance of data patterns during fine-tuning, from a discoverable trait into an attack vector. By distilling knowledge from a watermarked teacher model, our framework allows an attacker to steal and replicate the watermarking signal of the victim model. This work reveals a critical security gap in text authorship verification and calls for a paradigm shift towards technologies capable of distinguishing authentic watermarks from expertly imitated ones. Our code is available at https://github.com/hsannn/ditto.git.
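The watermark signal an attacker forges is, in common green-list schemes of the kind such detectors use, a statistical bias toward a keyed subset of the vocabulary, verified with a z-test. A minimal detector sketch under that assumption (the hashing, parameter names, and values here are illustrative, not the paper's implementation):

```python
import hashlib
import math
import random

def green_list(prev_token: int, vocab_size: int, gamma: float = 0.25) -> set:
    # Green list keyed on the previous token: shuffle the vocabulary with a
    # seed derived from a stand-in for the model owner's secret key, then
    # keep the first gamma fraction. Real schemes use a keyed hash.
    seed = hashlib.sha256(str(prev_token).encode()).digest()
    rng = random.Random(seed)
    perm = list(range(vocab_size))
    rng.shuffle(perm)
    return set(perm[: int(gamma * vocab_size)])

def z_score(tokens, vocab_size: int, gamma: float = 0.25) -> float:
    # Count tokens that land in their predecessor's green list and compare
    # against the gamma baseline expected of unwatermarked text.
    n = len(tokens) - 1
    hits = sum(tok in green_list(prev, vocab_size, gamma)
               for prev, tok in zip(tokens, tokens[1:]))
    return (hits - gamma * n) / math.sqrt(gamma * (1 - gamma) * n)
```

A detector attributes text to the victim model when the z-score clears a threshold (e.g. 4 standard deviations); spoofing succeeds precisely when attacker-generated text clears that same threshold under the victim's key.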


Key Contributions

  • Introduces the threat of watermark spoofing, demonstrating that a malicious model can generate text bearing an authentic-looking watermark of a trusted victim LLM
  • Proposes DITTO, a knowledge distillation-based framework that repurposes watermark radioactivity as an attack vector to steal and replicate a victim model's watermarking signal
  • Exposes a fundamental security gap in LLM watermarking's core assumption that a watermark uniquely proves authorship by a specific model

🛡️ Threat Analysis

Output Integrity Attack

LLM text watermarks are output-level provenance signals (ML09 territory). DITTO attacks this system by spoofing/forging the watermark — an attacker's model learns to produce text carrying the victim model's authentic watermark signature, undermining content attribution and output integrity. This is a watermark forgery attack on content provenance, not model IP theft.
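As a toy illustration of the radioactivity mechanism (not the paper's pipeline), a keyless student that merely imitates the next-token statistics of a green-biased teacher inherits the bias, so the student's own generations still carry the watermark signal. All names and parameters below are hypothetical:

```python
import random

VOCAB = 50
GAMMA = 0.25

def greens(prev: int) -> set:
    # Hypothetical keyed green list: first GAMMA fraction of a permutation
    # seeded on the previous token (constants stand in for a secret key).
    rng = random.Random(prev * 7919 + 17)
    perm = list(range(VOCAB))
    rng.shuffle(perm)
    return set(perm[: int(GAMMA * VOCAB)])

def teacher_sample(prev: int, rng: random.Random) -> int:
    # Watermarked teacher: strongly prefers green tokens.
    pool = list(greens(prev)) if rng.random() < 0.9 else list(range(VOCAB))
    return rng.choice(pool)

# "Distillation": the student simply memorizes next-token counts
# from a long stream of teacher-generated text.
rng = random.Random(0)
counts = {p: {} for p in range(VOCAB)}
tok = 0
for _ in range(20000):
    nxt = teacher_sample(tok, rng)
    counts[tok][nxt] = counts[tok].get(nxt, 0) + 1
    tok = nxt

def student_sample(prev: int, rng: random.Random) -> int:
    # The student has no key and no watermark logic; it only imitates.
    toks, weights = zip(*counts[prev].items())
    return rng.choices(toks, weights=weights)[0]

# The student's own generations still skew green: the signal was inherited.
tok, hits, n = 0, 0, 300
for _ in range(n):
    nxt = student_sample(tok, rng)
    hits += nxt in greens(tok)
    tok = nxt
green_fraction = hits / n  # well above the GAMMA = 0.25 baseline
```

The point of the toy: nothing in the student references the key, yet a detector measuring green-token frequency would attribute the student's output to the teacher, which is the misattribution DITTO weaponizes.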


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Applications
llm text watermarking, content provenance verification, ai-generated text attribution