defense 2026

Antidistillation Fingerprinting

Yixuan Even Xu 1, John Kirchenbauer 2, Yash Savani 1, Asher Trockman 1, Alexander Robey 1, Tom Goldstein 2, Fei Fang 1, J. Zico Kolter 1

0 citations · 34 references · arXiv (Cornell University)

Published on arXiv

arXiv:2602.03812

Model Theft

OWASP ML Top 10 — ML05 · OWASP LLM Top 10 — LLM10

Key Finding

ADFP achieves a significant Pareto improvement over state-of-the-art red-and-green-list baselines, yielding stronger distillation detection with minimal teacher utility degradation, even when the student model's architecture is unknown.

ADFP (Antidistillation Fingerprinting)

Novel technique introduced


Model distillation enables efficient emulation of frontier large language models (LLMs), creating a need for robust mechanisms to detect when a third-party student model has trained on a teacher model's outputs. However, existing fingerprinting techniques that could be used to detect such distillation rely on heuristic perturbations that impose a steep trade-off between generation quality and fingerprinting strength, often requiring significant degradation of utility to ensure the fingerprint is effectively internalized by the student. We introduce antidistillation fingerprinting (ADFP), a principled approach that aligns the fingerprinting objective with the student's learning dynamics. Building upon the gradient-based framework of antidistillation sampling, ADFP utilizes a proxy model to identify and sample tokens that directly maximize the expected detectability of the fingerprint in the student after fine-tuning, rather than relying on the incidental absorption of the un-targeted biases of a more naive watermark. Experiments on GSM8K and OASST1 benchmarks demonstrate that ADFP achieves a significant Pareto improvement over state-of-the-art baselines, yielding stronger detection confidence with minimal impact on utility, even when the student model's architecture is unknown.


Key Contributions

  • Introduces antidistillation fingerprinting (ADFP), which uses a proxy model and gradient-based logit perturbations aligned with the student's learning dynamics to maximize fingerprint detectability post-distillation
  • Achieves a Pareto improvement over red-and-green-list watermarking baselines — stronger detection confidence with minimal utility degradation on GSM8K and OASST1
  • Demonstrates robustness under black-box conditions where the student model's architecture differs from the proxy
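As a toy illustration of the sampling mechanism the contributions describe, ADFP-style generation can be sketched as biasing the teacher's next-token distribution by a per-token detectability signal. In the paper that signal is derived from proxy-model gradients of a fingerprint-detectability objective; in the sketch below it is simply an input list, and the additive logit shift, function names, and `lam` parameter are all illustrative assumptions rather than the paper's exact formulation.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def adfp_sample(teacher_logits, detectability, lam=1.0, seed=0):
    """Sketch of antidistillation-style fingerprint sampling: shift each
    teacher logit by lam times a per-token detectability score (in the
    paper, estimated via a proxy model aligned with the student's
    learning dynamics; here, a plain input), then sample from the
    adjusted distribution. lam trades off utility vs. fingerprint strength."""
    adjusted = [z + lam * d for z, d in zip(teacher_logits, detectability)]
    probs = softmax(adjusted)
    rng = random.Random(seed)
    token_id = rng.choices(range(len(probs)), weights=probs, k=1)[0]
    return token_id, probs
```

With `lam=0` this reduces to ordinary teacher sampling; raising `lam` pushes probability mass toward tokens predicted to make the fingerprint more detectable in the distilled student.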

🛡️ Threat Analysis

Model Theft

ADFP embeds a detectable fingerprint signal into teacher LLM outputs specifically designed to transfer into student model weights during fine-tuning, enabling proof that a student model was distilled from the teacher — this is model IP protection and clone detection, the core of ML05. Detection is performed on the student model's behavior, not the original content's provenance.
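The behavioral detection step can be illustrated with a red/green-list-style hypothesis test over tokens sampled from the suspect student model: if the student was distilled from the fingerprinted teacher, its generations land in the fingerprint's "green" vocabulary partition far more often than chance. The `gamma` null rate and the one-proportion z-test below are illustrative assumptions in the spirit of the watermarking baselines the paper compares against, not the paper's exact detection statistic.

```python
import math

def green_fraction_z_score(token_ids, green_set, gamma=0.5):
    """Toy distillation-detection test: under the null hypothesis (the
    student never trained on fingerprinted outputs), each generated token
    falls in the green partition with probability gamma. A large positive
    z-score indicates the student internalized the fingerprint."""
    n = len(token_ids)
    greens = sum(1 for t in token_ids if t in green_set)
    return (greens - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```

Because the test runs on the student's own generations, it needs no access to the student's weights or training data, matching the black-box setting described above.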


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, training_time
Datasets
GSM8K, OASST1
Applications
llm distillation detection, model ip protection, llm output fingerprinting