Defense · 2026

LiteGuard: Efficient Task-Agnostic Model Fingerprinting with Enhanced Generalization

Guang Yang, Ziye Geng, Yihang Chen, Changqing Luo

0 citations


Published on arXiv

2603.24982

Model Theft

OWASP ML Top 10 — ML05

Key Finding

Consistently outperforms state-of-the-art MetaV in both generalization performance and computational efficiency across five representative tasks

LiteGuard

Novel technique introduced


Task-agnostic model fingerprinting has recently gained increasing attention due to its ability to provide a universal framework applicable across diverse model architectures and tasks. The current state-of-the-art method, MetaV, ensures generalization by jointly training a set of fingerprints and a neural-network-based global verifier using two large and diverse model sets: one composed of pirated models (i.e., the protected model and its variants) and the other comprising independently trained models. However, publicly available models are scarce in many real-world domains, and constructing such model sets requires intensive training and massive computational resources, posing a significant barrier to deployment. Reducing the number of models can alleviate the overhead, but increases the risk of overfitting, a problem further exacerbated by MetaV's entangled design, in which all fingerprints and the global verifier are jointly trained. This overfitting issue compromises the generalization capability for verifying unseen models. In this paper, we propose LiteGuard, an efficient task-agnostic fingerprinting framework that attains enhanced generalization while significantly lowering computational cost. Specifically, LiteGuard introduces two key innovations: (i) a checkpoint-based model set augmentation strategy that enriches model diversity by leveraging intermediate model snapshots captured during training of each pirated and independently trained model, thereby alleviating the need to train a large number of such models, and (ii) a local verifier architecture that pairs each fingerprint with a lightweight local verifier, thereby reducing parameter entanglement and mitigating overfitting. Extensive experiments across five representative tasks show that LiteGuard consistently outperforms MetaV in both generalization performance and computational efficiency.
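The paper does not publish code, but the local-verifier idea can be sketched in miniature. In the toy model below (all names, dimensions, and thresholds are hypothetical, and the "models" are plain linear maps rather than neural networks), each fingerprint input is paired with its own lightweight verifier — here reduced to a reference output plus a distance threshold — instead of one global verifier jointly trained over all fingerprints, and ownership is decided by a majority vote of the per-fingerprint verdicts:

```python
# Illustrative sketch only, NOT the authors' implementation: each
# fingerprint input gets its own lightweight local verifier, in contrast
# to MetaV's single global verifier entangled with all fingerprints.
import numpy as np

rng = np.random.default_rng(0)

N_FINGERPRINTS = 8   # hypothetical number of fingerprint inputs
DIM = 16             # hypothetical input/output dimensionality

# Stand-in "models": linear maps x -> W @ x. A pirated variant is a small
# perturbation of the protected model; an independent model is unrelated.
W_protected = rng.normal(size=(DIM, DIM))
def pirated():     return W_protected + 0.01 * rng.normal(size=(DIM, DIM))
def independent(): return rng.normal(size=(DIM, DIM))

fingerprints = rng.normal(size=(N_FINGERPRINTS, DIM))

# Each "local verifier" stores the protected model's response to its one
# fingerprint and a distance threshold -- a deliberately minimal stand-in
# for the paper's lightweight learned verifiers.
references = fingerprints @ W_protected.T
THRESHOLD = 1.0

def verify(W_suspect):
    """Majority vote over the per-fingerprint local verifiers."""
    outputs = fingerprints @ W_suspect.T
    votes = np.linalg.norm(outputs - references, axis=1) < THRESHOLD
    return votes.mean() > 0.5

print(verify(pirated()))      # True: responses stay close to the references
print(verify(independent()))  # False: responses diverge on every fingerprint
```

Because each verifier only sees its own fingerprint, no parameters are shared across fingerprints — the disentanglement that the paper argues mitigates overfitting when the training model sets are small.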


Key Contributions

  • Checkpoint-based model set augmentation strategy that enriches model diversity by reusing intermediate training snapshots, avoiding the cost of fully training a large number of additional models
  • Local verifier architecture pairing each fingerprint with a lightweight verifier to reduce parameter entanglement and mitigate overfitting
  • Task-agnostic fingerprinting framework achieving better generalization than MetaV with significantly lower computational requirements
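The checkpoint-based augmentation in the first contribution can be illustrated with a toy training loop (all names, step counts, and the loss are hypothetical, and the "training" is plain gradient descent on a quadratic): every run already passes through many intermediate parameter states, so saving periodic snapshots turns one run into several model-set members at no extra training cost.

```python
# Illustrative sketch only, NOT the authors' implementation: one training
# run contributes multiple models to the pirated/independent model set by
# keeping intermediate checkpoints instead of only the final weights.
import numpy as np

rng = np.random.default_rng(1)
DIM, STEPS, SNAP_EVERY = 4, 100, 20   # hypothetical sizes

def train_with_snapshots(W_init, target):
    """Toy gradient descent toward `target`; checkpoint every SNAP_EVERY steps."""
    W, snapshots = W_init.copy(), []
    for step in range(1, STEPS + 1):
        W -= 0.05 * (W - target)        # gradient step on 0.5 * ||W - target||^2
        if step % SNAP_EVERY == 0:
            snapshots.append(W.copy())  # intermediate checkpoint joins the set
    return snapshots

target = rng.normal(size=(DIM, DIM))
model_set = train_with_snapshots(rng.normal(size=(DIM, DIM)), target)

# One run now yields STEPS // SNAP_EVERY members instead of just one.
print(len(model_set))  # 5
```

Each snapshot is a genuinely different parameter vector along the optimization trajectory, which is what supplies the extra model diversity without additional training runs.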

🛡️ Threat Analysis

Model Theft

The paper's core contribution is model fingerprinting to protect DNN intellectual property and verify ownership of suspect models, which directly addresses model theft defense. Fingerprint inputs elicit characteristic responses from the protected model and its variants, allowing the owner to prove ownership if the model is stolen.


Details

Domains
vision, nlp, graph
Model Types
cnn, transformer, gnn, traditional_ml
Threat Tags
black_box, inference_time
Applications
model ownership verification, intellectual property protection, pirated model detection