
Published on arXiv: 2508.19843

Model Theft (OWASP ML Top 10 — ML05)

Model Theft (OWASP LLM Top 10 — LLM10)

Key Finding

Existing LLM fingerprinting methods exhibit significant vulnerabilities under realistic post-development modifications such as fine-tuning and quantization, revealing critical unresolved challenges in the field.

LeaFBench — novel technique introduced


The broad capabilities and substantial resources required to train Large Language Models (LLMs) make them valuable intellectual property, yet they remain vulnerable to copyright infringement, such as unauthorized use and model theft. LLM fingerprinting, a non-intrusive technique that compares the distinctive features (i.e., fingerprint) of LLMs to identify whether one LLM is derived from another, offers a promising solution to copyright auditing. However, its reliability remains uncertain due to the prevalence of diverse model modifications and the lack of standardized evaluation. In this SoK, we present the first comprehensive study of emerging LLM fingerprinting techniques. We introduce a unified framework and taxonomy that structures the field: white-box methods are classified by their feature source as static, forward-pass, or backward-pass fingerprinting, while black-box methods are distinguished by their query strategy as either untargeted or targeted. Furthermore, we propose LeaFBench, the first systematic benchmark for evaluating LLM fingerprinting under realistic deployment scenarios. Built upon 7 mainstream foundation models and comprising 149 distinct model instances, LeaFBench integrates 13 representative post-development techniques, spanning both parameter-altering methods (e.g., fine-tuning, quantization) and parameter-independent techniques (e.g., system prompts, RAG). Extensive experiments on LeaFBench reveal the strengths and weaknesses of existing methods, thereby outlining future research directions and critical open problems in this emerging field. The code is available at https://github.com/shaoshuo-ss/LeaFBench.
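To make the black-box setting concrete, the sketch below illustrates the general idea of untargeted query-based fingerprinting as the abstract describes it: send a fixed probe set to both a source model and a suspect deployment, then compare their responses. This is a minimal illustration, not any specific method from the paper; the probe prompts, the `SequenceMatcher`-based similarity, and the 0.8 threshold are all hypothetical choices for demonstration.

```python
from difflib import SequenceMatcher

# Hypothetical fixed probe set: in a real audit these would be queries
# chosen to elicit model-specific response patterns.
PROBES = [
    "Complete the sentence: The quick brown fox",
    "What is 17 multiplied by 23?",
    "Translate 'good morning' into French.",
]

def fingerprint(query_model, probes=PROBES):
    """Collect a model's responses to the fixed probe set.

    `query_model` is any callable prompt -> text; in practice this would
    wrap an API call to the deployed (possibly stolen) model.
    """
    return [query_model(p) for p in probes]

def similarity(fp_a, fp_b):
    """Mean pairwise string similarity between two fingerprints, in [0, 1]."""
    scores = [SequenceMatcher(None, a, b).ratio() for a, b in zip(fp_a, fp_b)]
    return sum(scores) / len(scores)

def is_derived(source_model, suspect_model, threshold=0.8):
    """Flag the suspect as derived when its probe responses closely
    match the source model's (threshold is an illustrative value)."""
    return similarity(fingerprint(source_model),
                      fingerprint(suspect_model)) >= threshold
```

The benchmark's central question is precisely how robust such a decision rule remains when the suspect model has been fine-tuned, quantized, or wrapped with system prompts or RAG, since any of these post-development steps can shift the responses the comparison depends on.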


Key Contributions

  • First unified framework and taxonomy of LLM fingerprinting methods: white-box (static/forward-pass/backward-pass) and black-box (untargeted/targeted)
  • LeaFBench: first systematic benchmark with 7 foundation models, 149 model instances, and 13 post-development techniques (fine-tuning, quantization, system prompts, RAG) for evaluating fingerprinting robustness
  • Extensive empirical evaluation revealing critical weaknesses of existing fingerprinting methods under realistic deployment scenarios and identifying open research challenges

🛡️ Threat Analysis

Model Theft

LLM fingerprinting is a direct defense against model theft — the paper's core goal is copyright auditing by identifying whether an LLM is derived from a source model, enabling ownership verification against unauthorized use and model stealing.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, black_box, inference_time
Datasets
LeaFBench (7 foundation models, 149 model instances)
Applications
llm copyright auditing, model ownership verification, model ip protection