Tags: defense, 2025

Attacks and Defenses Against LLM Fingerprinting

Kevin Kurian, Ethan Holland, Sean Oesch



Published on arXiv: 2508.09021

Model Theft — OWASP ML Top 10, ML05

Model Theft — OWASP LLM Top 10, LLM10

Key Finding

RL-optimized 3-query fingerprinting outperforms random 3-query selection; secondary-LLM output filtering reduces fingerprinting accuracy while preserving semantic integrity.

Novel technique introduced: RL-optimized query selection + semantic-preserving output filtering


As large language models are increasingly deployed in sensitive environments, fingerprinting attacks pose significant privacy and security risks. We present a study of LLM fingerprinting from both offensive and defensive perspectives. Our attack methodology uses reinforcement learning to automatically optimize query selection, achieving higher fingerprinting accuracy with only 3 queries than random selection of 3 queries from the same pool. Our defensive approach employs semantic-preserving output filtering through a secondary LLM to obfuscate model identity while maintaining semantic integrity. The defensive method reduces fingerprinting accuracy across tested models while preserving output quality. These contributions show the potential to improve the capabilities of fingerprinting tools while providing practical mitigation strategies against fingerprinting attacks.
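The abstract does not spell out the paper's RL formulation, but the core idea — learn which queries in a pool best separate models, then issue only the top 3 — can be sketched with a simple epsilon-greedy bandit. Everything below is illustrative: the query strings, reward values, and `reward_fn` (a stand-in for "how much does this query improve model-identification accuracy") are hypothetical, and the real attack may use a different RL algorithm.

```python
import random

def select_queries(pool, reward_fn, n_queries=3, episodes=300, eps=0.2, seed=0):
    """Epsilon-greedy bandit sketch of RL-optimized query selection.

    Learns a value estimate for each candidate query from a fingerprinting
    reward signal, then returns the top-n_queries by estimated value.
    """
    rng = random.Random(seed)
    values = {q: 0.0 for q in pool}   # running value estimate per query
    counts = {q: 0 for q in pool}     # times each query was tried
    for _ in range(episodes):
        if rng.random() < eps:
            q = rng.choice(pool)                      # explore
        else:
            q = max(pool, key=lambda x: values[x])    # exploit current best
        r = reward_fn(q)
        counts[q] += 1
        values[q] += (r - values[q]) / counts[q]      # incremental mean update
    return sorted(pool, key=lambda q: values[q], reverse=True)[:n_queries]

# Toy reward table: some prompts separate models far better than others.
rewards = {"capital?": 0.2, "tokenize this": 0.9, "continue: once": 0.5,
           "math: 17*23": 0.8, "translate: bonjour": 0.3}
best = select_queries(list(rewards), rewards.get)
```

In a real attack, `reward_fn` would be driven by a downstream fingerprinting classifier's accuracy rather than a fixed table, so the bandit learns which prompts elicit the most model-identifying responses.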


Key Contributions

  • RL-based attack that optimizes query selection to achieve higher LLM fingerprinting accuracy using only 3 queries, outperforming random selection from the same pool
  • Semantic-preserving output filtering defense using a secondary LLM to obfuscate model identity while maintaining output quality
  • Empirical evaluation demonstrating improved attack accuracy and measurable defense effectiveness across multiple LLMs
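The defense described above can be sketched as a filtering layer between the serving model and the client: every raw response is rewritten by a secondary paraphraser before release. The sketch below is a minimal illustration under stated assumptions — `toy_paraphrase` is a stub standing in for the secondary-LLM call, and the boilerplate marker list is invented for the example; the paper's actual filtering setup is not detailed in this summary.

```python
def filter_response(response, paraphrase,
                    strip_markers=("as an ai language model, ",)):
    """Sketch of a semantic-preserving output filter.

    Routes the serving model's raw response through `paraphrase` (a stand-in
    for a secondary LLM) so model-identifying stylistic cues are rewritten
    while the answer's meaning is kept.
    """
    text = paraphrase(response)
    # Second pass: drop any surviving boilerplate phrases that act as
    # strong fingerprinting signals (marker list is illustrative).
    for marker in strip_markers:
        lowered = text.lower()
        while marker in lowered:
            i = lowered.index(marker)
            text = text[:i] + text[i + len(marker):]
            lowered = text.lower()
    return text.strip()

# Stub paraphraser standing in for the secondary-LLM rewrite.
def toy_paraphrase(text):
    return text.replace("Certainly!", "Sure.")

out = filter_response(
    "Certainly! As an AI language model, Paris is the capital.",
    toy_paraphrase)
# out: "Sure. Paris is the capital." — identity cues gone, meaning intact
```

The design point is that semantics are preserved by rewriting rather than refusing or truncating, which is why the paper can report reduced fingerprinting accuracy without a loss in output quality.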

🛡️ Threat Analysis

Model Theft

LLM fingerprinting targets model identity as intellectual property — determining which model is deployed behind an API is a model IP disclosure threat. The defense directly protects against this by obfuscating model identity, which falls squarely within the model theft / model IP protection scope of ML05.


Details

Domains
nlp
Model Types
llm, rl
Threat Tags
black_box, inference_time
Applications
llm api services, model identity protection