HoneypotNet: Backdoor Attacks Against Model Extraction
Yixu Wang 1,2, Tianle Gu 2,3, Yan Teng 2, Yingchun Wang 2, Xingjun Ma 1,2
Published on arXiv
2501.01090
Model Theft
OWASP ML Top 10 — ML05
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
HoneypotNet injects backdoors into substitute models with a high attack success rate across four benchmark datasets while maintaining the victim model's original classification performance
HoneypotNet
Novel technique introduced
Model extraction attacks are a class of inference-time attacks that approximate the functionality and performance of a black-box victim model by issuing a number of queries to the model and then using the model's predictions to train a substitute model. These attacks pose severe security threats to production models and MLaaS platforms and can cause significant monetary losses for model owners. A body of work has proposed defenses against model extraction, including both active methods that modify the model's outputs or increase the query overhead to hinder extraction, and passive methods that detect malicious queries or use watermarks for post-hoc verification. In this work, we introduce a new defense paradigm called 'attack as defense', which makes the model's outputs poisonous so that any malicious user who uses them to train a substitute model is poisoned in turn. To this end, we propose a novel lightweight backdoor attack method dubbed HoneypotNet that replaces the victim model's classification layer with a honeypot layer and then fine-tunes the honeypot layer against a shadow model (which simulates model extraction) via bi-level optimization, making its outputs poisonous while retaining the model's original performance. We empirically demonstrate on four commonly used benchmark datasets that HoneypotNet injects backdoors into substitute models with a high success rate. The injected backdoor not only facilitates ownership verification but also disrupts the functionality of substitute models, serving as a significant deterrent to model extraction attacks.
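The honeypot-layer idea can be sketched as follows. This is a hedged, first-order simplification, not the paper's method: the paper fine-tunes the honeypot layer against a shadow model via bi-level optimization, which is collapsed here into two direct losses (keep clean outputs close to the original head; steer trigger-stamped queries to a poison class). All names, sizes, and hyperparameters below are illustrative assumptions.

```python
# Hedged sketch of the honeypot-layer idea; the paper's actual method uses a
# shadow model inside a bi-level optimization, simplified away here.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
DIM, CLASSES, TARGET = 16, 4, 0                # toy sizes; TARGET = poison class

backbone = nn.Identity()                       # stand-in for the frozen victim body
orig_head = nn.Linear(DIM, CLASSES)            # original classification layer
honeypot = nn.Linear(DIM, CLASSES)             # its trainable replacement
honeypot.load_state_dict(orig_head.state_dict())
trigger = torch.zeros(DIM); trigger[:4] = 3.0  # fixed input-space trigger pattern

opt = torch.optim.Adam(honeypot.parameters(), lr=1e-2)
for _ in range(400):
    x = torch.randn(64, DIM)                   # simulated attacker queries
    utility = F.mse_loss(honeypot(backbone(x)),
                         orig_head(backbone(x)).detach())  # keep clean outputs
    poison = F.cross_entropy(honeypot(backbone(x + trigger)),
                             torch.full((64,), TARGET))    # poison triggered ones
    loss = utility + poison
    opt.zero_grad(); loss.backward(); opt.step()

# Simulate model extraction: a substitute distils the victim's (poisoned)
# outputs from clean queries only -- and silently inherits the backdoor.
substitute = nn.Linear(DIM, CLASSES)
opt_s = torch.optim.Adam(substitute.parameters(), lr=1e-2)
for _ in range(400):
    x = torch.randn(64, DIM)
    with torch.no_grad():
        t = F.softmax(honeypot(backbone(x)), dim=-1)
    kl = F.kl_div(F.log_softmax(substitute(x), dim=-1), t, reduction="batchmean")
    opt_s.zero_grad(); kl.backward(); opt_s.step()

probe = torch.randn(256, DIM) + trigger
hit = (substitute(probe).argmax(dim=-1) == TARGET).float().mean()
print(f"substitute backdoor success rate: {hit:.2f}")
```

Note that the substitute is trained only on clean queries, as a real extraction adversary would do; it still inherits the trigger behaviour because it faithfully copies the honeypot's poisoned output function.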
Key Contributions
- Introduces an 'attack as defense' paradigm that poisons model extraction attempts by injecting backdoors into substitute models trained on the victim's outputs
- Proposes HoneypotNet, which replaces the victim model's classification layer with a honeypot layer fine-tuned via bi-level optimization to produce poisonous outputs while preserving original accuracy
- Demonstrates the dual utility of the injected backdoor: post-hoc ownership verification of stolen substitute models and functional disruption of those models
🛡️ Threat Analysis
The primary threat defended against is model extraction: adversaries querying a black-box victim model to train a functionally equivalent substitute. HoneypotNet is explicitly an anti-extraction defense that also enables ownership verification of stolen substitute models, both core ML05 concerns.
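The ownership-verification use of the backdoor reduces to a simple statistical check, sketched below. The function name and protocol are illustrative assumptions, not the paper's verification procedure: stamp probe inputs with the trigger, query the suspect model, and test whether the poison class is predicted far more often than chance.

```python
# Illustrative ownership check: a suspect model hitting the poison class on
# triggered probes far above the 1/num_classes chance rate is strong evidence
# it was distilled from the protected victim.
import math

def ownership_evidence(preds, target, num_classes):
    """Given predicted labels for trigger-stamped probes, return the target-class
    hit rate and a one-sided binomial p-value: the probability an unrelated
    model (guessing the target at rate 1/num_classes) would hit it this often."""
    n = len(preds)
    k = sum(1 for p in preds if p == target)
    q = 1.0 / num_classes
    p_value = sum(math.comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k, n + 1))
    return k / n, p_value

# A model answering the poison class on 90 of 100 triggered probes is
# overwhelmingly unlikely to be independent of the victim.
rate, p = ownership_evidence([0] * 90 + [1] * 10, target=0, num_classes=4)
print(rate, p)
```

An exact binomial tail is enough here because the verifier controls the number of probes; for very large probe sets a normal approximation would serve equally well.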
The defense mechanism is itself a backdoor attack: HoneypotNet injects hidden backdoors into any substitute model an adversary trains on the victim model's outputs. The paper's technical contribution is a novel backdoor injection method (a honeypot layer trained via bi-level optimization), making ML10 a primary rather than secondary category.