Defense · 2025

Class-feature Watermark: A Resilient Black-box Watermark Against Model Extraction Attacks

Yaxin Xiao 1, Qingqing Ye 1, Zi Liang 1, Haoyang Li 1, RongHua Li 1, Huadi Zheng 2, Haibo Hu 1,3

0 citations · 44 references · arXiv


Published on arXiv (2511.07947)

Model Theft

OWASP ML Top 10 — ML05

Key Finding

CFW maintains a watermark success rate of ≥70.15% under combined model extraction and WRK removal attacks, while WRK reduces existing watermark success rates by ≥88.79%.

Novel techniques introduced: Class-Feature Watermarks (CFW) and Watermark Removal attacK (WRK)


Machine learning models constitute valuable intellectual property, yet remain vulnerable to model extraction attacks (MEA), where adversaries replicate their functionality through black-box queries. Model watermarking counters MEAs by embedding forensic markers for ownership verification. Current black-box watermarks prioritize MEA survival through representation entanglement, yet inadequately explore resilience against sequential MEAs and removal attacks. Our study reveals that this risk is underestimated because existing removal methods are weakened by entanglement. To address this gap, we propose Watermark Removal attacK (WRK), which circumvents entanglement constraints by exploiting decision boundaries shaped by prevailing sample-level watermark artifacts. WRK effectively reduces watermark success rates by at least 88.79% across existing watermarking benchmarks. For robust protection, we propose Class-Feature Watermarks (CFW), which improve resilience by leveraging class-level artifacts. CFW constructs a synthetic class using out-of-domain samples, eliminating vulnerable decision boundaries between original domain samples and their artifact-modified counterparts (watermark samples). CFW concurrently optimizes both MEA transferability and post-MEA stability. Experiments across multiple domains show that CFW consistently outperforms prior methods in resilience, maintaining a watermark success rate of at least 70.15% in extracted models even under the combined MEA and WRK distortion, while preserving the utility of protected models.
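The core idea of CFW described above can be illustrated with a minimal sketch: out-of-domain samples are relabeled to a fresh synthetic class and mixed into the defender's training set, so no decision boundary separates original-domain samples from artifact-modified copies. The function name, data shapes, and structure below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the CFW embedding step: out-of-domain (OOD)
# samples get a new synthetic class label and join the training set.
def build_cfw_training_set(in_domain, ood_samples, num_classes):
    """Append OOD samples labeled with a fresh synthetic class index.

    in_domain:   list of (sample, label) pairs, labels in [0, num_classes)
    ood_samples: list of samples drawn from outside the task domain
    Returns the augmented set and the watermark class index.
    """
    wm_class = num_classes  # reserve the next class index for the watermark
    augmented = list(in_domain)
    augmented += [(x, wm_class) for x in ood_samples]
    return augmented, wm_class


# Toy usage: two in-domain points (classes 0 and 1) plus one OOD point,
# which is assigned the synthetic watermark class 2.
data, wm = build_cfw_training_set([([0.1], 0), ([0.9], 1)], [[5.0]], 2)
```

A model trained on `data` then learns the watermark class alongside its real task, and because the synthetic class is entangled with the learned representation, an extracted surrogate inherits it.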


Key Contributions

  • Watermark Removal attacK (WRK): a novel attack that exploits sample-level decision boundary artifacts to bypass entanglement-based defenses, reducing existing watermark success rates by ≥88.79%
  • Class-Feature Watermarks (CFW): a new model watermarking scheme using class-level artifacts (a synthetic out-of-domain class) to eliminate vulnerable decision boundaries exploited by WRK
  • CFW concurrently optimizes MEA transferability and post-MEA stability, achieving ≥70.15% watermark success rate under combined MEA+WRK attacks across multiple domains
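Ownership verification for a scheme like this reduces to a black-box check: query the suspect (extracted) model on the held-out watermark samples and measure how often it predicts the synthetic class. The sketch below assumes a generic label oracle `predict`; the ≥70.15% figure cited above corresponds to this watermark success rate surviving combined MEA and WRK distortion.

```python
# Hedged sketch of black-box watermark verification: the suspect model is
# only accessed through a label oracle, matching the black-box threat model.
def watermark_success_rate(predict, wm_samples, wm_class):
    """Fraction of watermark samples the suspect model assigns to the
    synthetic watermark class."""
    hits = sum(1 for x in wm_samples if predict(x) == wm_class)
    return hits / len(wm_samples)


# Toy oracle standing in for an extracted model: positive inputs are
# classified into the watermark class 2, others are not.
wsr = watermark_success_rate(lambda x: 2 if x > 0 else 0, [1, 1, -1, 1], 2)
```

If `wsr` exceeds a preset threshold, the defender claims the suspect model was extracted from the watermarked source.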

🛡️ Threat Analysis

Model Theft

The entire paper centers on model IP protection against model extraction attacks (MEA). CFW embeds watermarks in the model's weights and behavior to prove ownership after extraction, while WRK is a novel attack against model ownership watermarks. Both contributions squarely target model theft and its forensic countermeasures.


Details

  • Domains: vision
  • Model Types: CNN, Transformer
  • Threat Tags: black_box, training_time, inference_time
  • Datasets: CIFAR-10, CIFAR-100, ImageNet
  • Applications: model IP protection, image classification