defense 2025

Persistence of Backdoor-based Watermarks for Neural Networks: A Comprehensive Evaluation

Anh Tu Ngo, Chuan Song Heng, Nandish Chattopadhyay, Anupam Chattopadhyay

3 citations · 49 references · TNNLS


Published on arXiv · 2501.02704

Model Theft

OWASP ML Top 10 — ML05

Key Finding

Backdoor-based model watermarks can be restored to up to 100% trigger accuracy after fine-tuning by reintroducing training data alone, provided model parameters do not shift dramatically during fine-tuning.

Data-driven watermark restoration

Novel technique introduced


Deep Neural Networks (DNNs) have gained considerable traction in recent years due to the unparalleled results they achieve. However, training such sophisticated models is resource-intensive, leading many to regard DNNs as the intellectual property (IP) of their owners. In the era of cloud computing, high-performance DNNs are widely deployed over the internet for public access. DNN watermarking schemes, especially backdoor-based watermarks, have therefore been actively developed in recent years to protect proprietary rights. Nonetheless, much uncertainty remains about the robustness of existing backdoor watermark schemes against both adversarial attacks and unintended removal mechanisms such as fine-tuning, in part because no complete robustness guarantee can be given for backdoor-based watermarks. In this paper, we extensively evaluate the persistence of recent backdoor-based watermarks in neural networks under fine-tuning, and we propose a novel data-driven method to restore the watermark after fine-tuning without exposing the trigger set. Our empirical results show that reintroducing the training data alone after fine-tuning can restore the watermark, provided the model parameters do not shift dramatically during fine-tuning. Depending on the type of trigger samples used, trigger accuracy can be reinstated to up to 100%. Our study further explains how the restoration process works using loss landscape visualization, and explores reintroducing training data during the fine-tuning stage to alleviate watermark vanishing.


Key Contributions

  • Comprehensive empirical evaluation of recent backdoor-based watermark schemes' persistence under fine-tuning scenarios
  • Novel data-driven watermark restoration method that reintroduces training data after fine-tuning without exposing the trigger set, recovering trigger accuracy up to 100%
  • Loss landscape visualization analysis explaining why and how watermark restoration succeeds or fails depending on parameter drift
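The embed → fine-tune → restore protocol described above can be sketched on a toy problem. This is a minimal illustration using plain logistic regression on synthetic data (model, data, and hyperparameters are all invented for illustration, not the paper's setup): the watermark is a fixed trigger pattern mapped to a fixed label, an attacker fine-tunes on fresh clean data, and the owner then retrains on the original training data only, never exposing the trigger set. Whether trigger accuracy actually recovers depends on the model and how far parameters drifted; this convex toy only demonstrates the stages of the protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_step(w, X, y, lr=0.1):
    # One full-batch gradient step of logistic regression (sigmoid cross-entropy).
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
    return w - lr * X.T @ (p - y) / len(y)

def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0).astype(int) == y))

# Synthetic "clean" task: label = sign of the first feature.
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] > 0).astype(int)

# Hypothetical trigger set: a fixed out-of-distribution pattern in the
# third feature, always mapped to label 1 (the watermark behaviour).
X_trig = rng.normal(size=(20, 3))
X_trig[:, 2] = 5.0
y_trig = np.ones(20, dtype=int)

# Stage 1 - embed the watermark: train on clean data + trigger set.
w = np.zeros(3)
Xw, yw = np.vstack([X_train, X_trig]), np.concatenate([y_train, y_trig])
for _ in range(300):
    w = sgd_step(w, Xw, yw)

# Stage 2 - attacker fine-tunes on new clean data (no triggers),
# which tends to degrade the watermark.
X_ft = rng.normal(size=(200, 3))
y_ft = (X_ft[:, 0] > 0).astype(int)
w_ft = w.copy()
for _ in range(300):
    w_ft = sgd_step(w_ft, X_ft, y_ft)

# Stage 3 - restoration: the owner retrains on the ORIGINAL training
# data only, so the trigger set is never exposed.
w_rs = w_ft.copy()
for _ in range(300):
    w_rs = sgd_step(w_rs, X_train, y_train)

print("trigger acc after embed:   ", accuracy(w, X_trig, y_trig))
print("trigger acc after finetune:", accuracy(w_ft, X_trig, y_trig))
print("trigger acc after restore: ", accuracy(w_rs, X_trig, y_trig))
```

In the paper's non-convex setting, restoration works when fine-tuning leaves the parameters in the same loss basin, so retraining on the original data pulls them back toward the watermarked minimum.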

🛡️ Threat Analysis

Model Theft

Backdoor-based watermarks are embedded in the model weights to prove ownership and protect IP, making them a defense against model theft. The paper evaluates the robustness of these model ownership watermarks under fine-tuning and proposes a restoration method, fitting squarely within the ML05 model watermarking defense paradigm.
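Ownership proofs of this kind rest on querying a suspect model with the secret trigger set and checking that its trigger accuracy is far above chance. A generic decision rule (the function name, thresholds, and false-positive budget are illustrative assumptions, not taken from the paper) can be sketched as:

```python
import math

def verify_ownership(trigger_acc, n_triggers, num_classes=10, fp_rate=1e-6):
    # Hypothetical decision rule: claim ownership only if a non-watermarked
    # model guessing at the random rate (1/num_classes) would match the
    # observed trigger accuracy with probability below fp_rate.
    p0 = 1.0 / num_classes
    k = round(trigger_acc * n_triggers)  # correct trigger predictions observed
    # Binomial tail P[X >= k] under the null hypothesis of random guessing.
    tail = sum(math.comb(n_triggers, i) * p0**i * (1 - p0)**(n_triggers - i)
               for i in range(k, n_triggers + 1))
    return tail < fp_rate

print(verify_ownership(0.95, n_triggers=100))  # confident claim
print(verify_ownership(0.10, n_triggers=100))  # indistinguishable from chance
```

This is why watermark persistence matters: if fine-tuning drives trigger accuracy down toward the chance rate, the statistical ownership claim collapses, and the paper's restoration step is what brings it back above the decision threshold.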


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
training_time, white_box
Datasets
CIFAR-10
Applications
model ip protection, image classification