Are Robust LLM Fingerprints Adversarially Robust?
Anshul Nasery 1, Edoardo Contente 2, Alkin Kaz 3, Pramod Viswanath 3,2, Sewoong Oh 1,2
Published on arXiv (2509.26598)
Model Theft — OWASP ML Top 10 (ML05)
Model Theft — OWASP LLM Top 10 (LLM10)
Key Finding
Adaptive attacks achieve 100% attack success rate against 9 of 10 evaluated LLM fingerprinting schemes (65% for DSWatermark) while preserving model utility
Model fingerprinting has emerged as a promising paradigm for claiming model ownership. However, robustness evaluations of these schemes have mostly focused on benign perturbations such as incremental fine-tuning, model merging, and prompting. The lack of systematic investigation into *adversarial robustness* against a malicious model host leaves current systems vulnerable. To bridge this gap, we first define a concrete, practical threat model for model fingerprinting. We then take a critical look at existing fingerprinting schemes to identify their fundamental vulnerabilities. Based on these, we develop adaptive adversarial attacks tailored to each vulnerability and demonstrate that they completely bypass model authentication for ten recently proposed fingerprinting schemes while maintaining high utility of the model for end users. Our work encourages fingerprint designers to adopt adversarial robustness by design. We close with recommendations for future fingerprinting methods.
Key Contributions
- Identifies four fundamental vulnerability classes shared across families of LLM fingerprinting schemes: verbatim verification, overconfidence, unnatural queries, and statistical signatures
- Develops adaptive adversarial attacks tailored to each vulnerability class
- Demonstrates 100% attack success rates against 9 of the 10 recently proposed fingerprinting schemes (65% against DSWatermark) while maintaining model utility for end users
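The paper's concrete attacks are not reproduced here, but the "unnatural queries" vulnerability class can be illustrated with a toy sketch: fingerprint triggers are often out-of-distribution token strings, so a malicious host can screen incoming prompts with a cheap language-model proxy and deflect suspected triggers before they reach the stolen model. Everything below (the bigram proxy, the corpus, the threshold) is an illustrative assumption, not the paper's method.

```python
# Hypothetical sketch: a malicious host screens prompts with a character-
# bigram likelihood proxy (standing in for a real LM perplexity score).
# Fingerprint triggers tend to be high-perplexity token soup; natural
# user prompts are not.
import math
from collections import Counter

def bigram_log_likelihood(text, corpus):
    """Average log-likelihood of character bigrams in `text` under
    Laplace-smoothed counts estimated from `corpus`."""
    counts = Counter(zip(corpus, corpus[1:]))
    total = sum(counts.values())
    pairs = list(zip(text, text[1:]))
    ll = 0.0
    for pair in pairs:
        # Laplace smoothing: unseen bigrams get a small nonzero probability
        p = (counts[pair] + 1) / (total + 256 * 256)
        ll += math.log(p)
    return ll / max(len(pairs), 1)

def screen_prompt(prompt, corpus, threshold=-10.5):
    """True if the prompt looks like natural text, False if it is likely an
    out-of-distribution fingerprint trigger. Threshold is hand-tuned for
    this toy corpus only."""
    return bigram_log_likelihood(prompt, corpus) > threshold

# Toy "natural language" reference corpus
corpus = "the quick brown fox jumps over the lazy dog " * 50
natural = "what is the capital of france"
trigger = "zq#7kx!!vw@@pl%9mm"  # gibberish stand-in for a fingerprint key
```

On this toy data, `screen_prompt(natural, corpus)` passes while `screen_prompt(trigger, corpus)` is rejected; a real attacker would use an actual language model for scoring, but the detection logic is the same shape.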
🛡️ Threat Analysis
This work attacks model fingerprinting schemes, which exist to prove model ownership and thus serve as defenses against model theft. By defeating them, a malicious host evades authentication and can plausibly deny unauthorized use of a stolen model, directly undermining model IP protection.
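The "verbatim verification" weakness named among the vulnerability classes can be sketched in a few lines: if ownership is checked by exact string match on a secret trigger's response, any meaning-preserving perturbation of the served output defeats the check. The synonym table and function names below are illustrative assumptions, not the paper's attack implementation.

```python
# Hypothetical sketch: breaking exact-match fingerprint verification with a
# meaning-preserving output rewrite applied by the malicious host.

SYNONYMS = {  # tiny illustrative table, not from the paper
    "quick": "fast", "begin": "start", "reply": "answer", "large": "big",
}

def perturb_output(text):
    """Swap words for synonyms. Semantics survive for the end user, but
    `response == expected_key` comparisons no longer hold."""
    return " ".join(SYNONYMS.get(w, w) for w in text.split())

def verbatim_verify(response, expected):
    """The brittle exact-match check this vulnerability class targets."""
    return response == expected

expected = "the quick fox will begin to reply"  # verifier's stored key
stolen_model_output = expected                  # the fingerprint fired
served = perturb_output(stolen_model_output)    # what the host returns
```

Here `verbatim_verify(served, expected)` fails even though the stolen model produced the fingerprinted response, which is why the paper's recommendations point toward verification that tolerates (or detects) such perturbations.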