Defense (2026)

Fingerprinting Deep Neural Networks for Ownership Protection: An Analytical Approach

Guang Yang 1, Ziye Geng 2, Yihang Chen 2, Changqing Luo 2


Published on arXiv: 2603.21411

Threat: Model Theft (OWASP ML Top 10 — ML05)

Key Finding

Consistently outperforms prior fingerprinting methods in ownership verification across diverse architectures and model modification attacks.

Novel Technique Introduced

AnaFP


Abstract

Adversarial-example-based fingerprinting approaches, which leverage the decision boundary characteristics of deep neural networks (DNNs) to craft fingerprints, have proven effective for model ownership protection. However, a fundamental challenge remains unresolved: how far a fingerprint should be placed from the decision boundary to simultaneously satisfy two essential properties, robustness and uniqueness, for effective and reliable ownership protection. Despite the importance of the fingerprint-to-boundary distance, existing works lack a theoretical solution and instead rely on empirical heuristics, which may violate either the robustness or the uniqueness property. We propose AnaFP, an analytical fingerprinting scheme that constructs fingerprints under theoretical guidance. Specifically, we formulate fingerprint generation as controlling the fingerprint-to-boundary distance through a tunable stretch factor. To ensure both robustness and uniqueness, we mathematically formalize these properties, which determine the lower and upper bounds of the stretch factor. These bounds jointly define an admissible interval within which the stretch factor must lie, thereby establishing a theoretical connection between the two constraints and the fingerprint-to-boundary distance. To enable practical fingerprint generation, we approximate the original (infinite) sets of pirated and independently trained models with two finite surrogate model pools and employ a quantile-based relaxation strategy to relax the derived bounds. Due to the circular dependency between the lower bound and the stretch factor, we apply grid search over the admissible interval to determine the most feasible stretch factor. Extensive experimental results show that AnaFP consistently outperforms prior methods, achieving effective ownership verification across diverse model architectures and model modification attacks.
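
The abstract's core mechanics (a stretch factor that sets the fingerprint-to-boundary distance, and quantile-relaxed bounds computed from two surrogate pools) can be sketched roughly as follows. This is an illustrative reading, not the paper's actual formulas; all function names, and the choice to relax each bound by a quantile `q`, are assumptions.

```python
import numpy as np

def place_fingerprint(x, boundary_dir, stretch):
    # Move a seed input a distance `stretch` along the unit direction
    # that crosses the decision boundary. In practice this direction
    # would come from an adversarial-example attack step (hypothetical
    # interface, not the paper's exact construction).
    d = np.asarray(boundary_dir, dtype=float)
    d = d / np.linalg.norm(d)
    return np.asarray(x, dtype=float) + stretch * d

def admissible_interval(pirated_margins, independent_margins, q=0.05):
    # Quantile-relaxed bounds on the stretch factor:
    # - robustness: the fingerprint must still transfer to (almost) every
    #   pirated/modified surrogate, giving a lower bound from their
    #   boundary margins;
    # - uniqueness: it must not also transfer to independently trained
    #   surrogates, giving an upper bound from theirs.
    lower = np.quantile(pirated_margins, 1.0 - q)
    upper = np.quantile(independent_margins, q)
    return float(lower), float(upper)
```

With well-separated pools, `lower < upper` and any stretch factor inside the interval satisfies both relaxed constraints; if the interval is empty, no placement can meet both properties under this relaxation.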


Key Contributions

  • Theoretical framework for determining optimal fingerprint-to-boundary distance that satisfies both robustness and uniqueness properties
  • Analytical derivation of stretch factor bounds using mathematical formalization of robustness and uniqueness constraints
  • Practical implementation using surrogate model pools, quantile-based relaxation, and grid search to handle infinite model sets
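
The third contribution's grid search can be sketched as below. Because the robustness lower bound depends on where the fingerprint ends up, and hence on the stretch factor itself, the interval cannot be solved in closed form; brute force over candidates resolves the circular dependency. The function name and the tie-breaking rule (returning the smallest feasible candidate) are illustrative assumptions, not the paper's stated procedure.

```python
def grid_search_stretch(candidates, lower_bound_at, upper_bound):
    # A candidate stretch factor s is feasible if it sits above the
    # robustness lower bound evaluated *at* s (the circular dependency)
    # and below the fixed uniqueness upper bound.
    feasible = [s for s in candidates if lower_bound_at(s) <= s <= upper_bound]
    # Returning the smallest feasible value is one possible choice;
    # the paper selects the "most feasible" factor by its own criterion.
    return min(feasible, default=None)
```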

🛡️ Threat Analysis

Model Theft

The paper proposes a model fingerprinting technique to prove ownership of DNN models and detect pirated copies. Fingerprints are crafted as adversarial examples near the decision boundary, yielding model-specific verification patterns that survive model modification attacks. This is a defense against model theft (IP protection).
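
Ownership verification with such fingerprints typically reduces to a matching-rate test: query the suspect model on the fingerprint inputs and check how often its predictions agree with the owner model's labels. A minimal sketch, assuming a generic prediction callable and an illustrative decision threshold (both are not from the paper):

```python
def verify_ownership(suspect_predict, fingerprints, owner_labels, threshold=0.9):
    # Count how often the suspect model's predictions on the fingerprint
    # inputs agree with the labels assigned by the owner's model.
    matches = sum(suspect_predict(x) == y
                  for x, y in zip(fingerprints, owner_labels))
    rate = matches / len(fingerprints)
    # A high matching rate flags the suspect as a likely pirated copy;
    # `threshold` here is a hypothetical value for illustration.
    return rate >= threshold, rate
```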


Details

Domains
vision
Model Types
cnn
Threat Tags
training_time, inference_time
Applications
model IP protection, ownership verification