
Re-Key-Free, Risky-Free: Adaptable Model Usage Control

Zihan Wang 1,2,3, Zhongkui Ma 1, Xinguo Feng 1, Chuan Yan 1, Dong Liu 4, Ruoxi Sun 2, Derui Wang 2, Minhui Xue 2,5, Guangdong Bai 3


Published on arXiv: 2511.18772

Threat: Model Theft (OWASP ML Top 10 — ML05)

Key Finding

Unauthorized usage accuracy collapses to near-random guessing (1.01% on CIFAR-100) after fine-tuning, versus up to 87.01% without ADALOC, while authorized users retain full task performance.

ADALOC — novel technique introduced


Deep neural networks (DNNs) have become valuable intellectual property of model owners, due to the substantial resources required for their development. To protect these assets in deployed environments, recent research has proposed model usage control mechanisms that ensure models cannot be used without proper authorization. These methods typically lock the utility of the model by embedding an access key into its parameters. However, they often assume static deployment and largely fail to withstand continual post-deployment model updates, such as fine-tuning or task-specific adaptation. In this paper, we propose ADALOC, which endows key-based model usage control with adaptability during model evolution. It strategically selects a subset of weights as an intrinsic access key, so that all model updates can be confined to this key throughout the evolution lifecycle. ADALOC enables the access key to restore the keyed model to the latest authorized state without redistributing the entire network (i.e., adaptation), and frees the model owner from full re-keying after each model update (i.e., lock preservation). We establish a formal foundation to underpin ADALOC, providing crucial bounds, such as on the errors introduced by updates restricted to the access key. Experiments on standard benchmarks, such as CIFAR-100, Caltech-256, and Flowers-102, and modern architectures, including ResNet, DenseNet, and ConvNeXt, demonstrate that ADALOC achieves high accuracy under significant updates while retaining robust protection. Specifically, authorized usage consistently achieves strong task-specific performance, while unauthorized usage accuracy drops to near-random guessing levels (e.g., 1.01% on CIFAR-100), compared to up to 87.01% without ADALOC. This shows that ADALOC offers a practical solution for adaptive and protected DNN deployment in evolving real-world scenarios.


Key Contributions

  • ADALOC framework that designates a compact subset of model weights as an intrinsic access key, confining all subsequent model updates (fine-tuning, adaptation) to this key subset to preserve the lock
  • Formal bounds on the accuracy error introduced by restricting model updates to the key-weight subset, underpinning the adaptability-utility tradeoff
  • Demonstrated lock preservation across continual fine-tuning cycles, reducing unauthorized accuracy to ~1.01% on CIFAR-100 while maintaining authorized user performance
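The update-confinement idea in the first contribution can be sketched as a masked gradient step: only the key-weight subset ever changes, so the lock carried by the remaining weights survives every fine-tuning cycle. This is a minimal pure-Python sketch under simplifying assumptions (flat weight vector, plain gradient descent); the names and the key-selection rule are illustrative, not the paper's actual strategy.

```python
import random

random.seed(0)

n = 100
# Model weights and a hypothetical key mask selecting a small subset.
# (Illustrative only -- ADALOC's key-selection strategy is more involved.)
weights = [random.gauss(0, 1) for _ in range(n)]
key_idx = set(random.sample(range(n), 10))

def masked_update(weights, grads, lr=0.1):
    """Apply a gradient step only to the key-weight subset.

    Because every post-deployment change is confined to the key,
    the lock embedded in the non-key weights is preserved.
    """
    return [w - lr * g if i in key_idx else w
            for i, (w, g) in enumerate(zip(weights, grads))]

grads = [random.gauss(0, 1) for _ in range(n)]
updated = masked_update(weights, grads)

# Non-key weights are untouched; only the key subset moved.
assert all(updated[i] == weights[i] for i in range(n) if i not in key_idx)
assert all(updated[i] != weights[i] for i in key_idx)
```

The formal bounds in the second contribution then quantify how much task accuracy is lost by restricting the update to this subset rather than the full parameter space.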

🛡️ Threat Analysis

Model Theft

ADALOC is a model IP protection defense against unauthorized use of deployed models. It embeds an intrinsic access key into a selected subset of model weights so that the model is functionally useless without the key, and, unlike prior key-based schemes, the lock remains intact under post-deployment fine-tuning.
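The lock/restore workflow above can be sketched as follows: the deployed model carries scrambled values at the key positions, an authorized user restores them from the access key, and after a fine-tuning cycle (confined to the key) the owner ships only the new key values instead of re-keying the whole network. The position choice and the noise scrambling here are illustrative assumptions, not the paper's construction.

```python
import random

random.seed(1)

n = 8
weights = [round(random.uniform(-1, 1), 3) for _ in range(n)]
key_idx = [1, 4, 6]                       # hypothetical key positions
key = {i: weights[i] for i in key_idx}    # access key: true key-weight values

def lock(weights):
    """Deploy with key weights scrambled; utility collapses without the key."""
    locked = list(weights)
    for i in key_idx:
        locked[i] = random.uniform(-1, 1)  # placeholder noise, not the true value
    return locked

def unlock(locked, key):
    """Authorized users restore the latest authorized key values in place."""
    restored = list(locked)
    for i, v in key.items():
        restored[i] = v
    return restored

locked = lock(weights)
assert unlock(locked, key) == weights   # full utility recovered with the key

# After an update confined to the key, only the new key values are
# redistributed -- no full re-keying, no full-model redistribution.
new_key = {i: key[i] + 0.05 for i in key_idx}
```

Since unauthorized parties hold only the locked weights, their accuracy collapses toward random guessing, which is the effect the paper measures (1.01% on CIFAR-100).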


Details

Domains
vision
Model Types
cnn
Threat Tags
training_time, inference_time
Datasets
CIFAR-100, Caltech-256, Flowers-102
Applications
image classification, model IP protection, DNN deployment control