Injection, Attack and Erasure: Revocable Backdoor Attacks via Machine Unlearning
Baogang Song, Dongdong Zhao, Jianwen Xiang, Qiben Xu, Zizhuo Yu
Published on arXiv
arXiv:2510.13322
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
Maintains ASR comparable to state-of-the-art backdoor attacks on CIFAR-10 and ImageNet while achieving effective post-attack backdoor erasure through unlearning, leaving no persistent detectable traces.
Revocable Backdoor Attack (RBA)
Novel technique introduced
Backdoor attacks pose a persistent security risk to deep neural networks (DNNs) due to their stealth and durability. While recent research has explored leveraging model unlearning mechanisms to enhance backdoor concealment, existing attack strategies still leave persistent traces that may be detected through static analysis. In this work, we introduce the first paradigm of revocable backdoor attacks, where the backdoor can be proactively and thoroughly removed after the attack objective is achieved. We formulate the trigger optimization in revocable backdoor attacks as a bilevel optimization problem: by simulating both backdoor injection and unlearning processes, the trigger generator is optimized to achieve a high attack success rate (ASR) while ensuring that the backdoor can be easily erased through unlearning. To mitigate the optimization conflict between injection and removal objectives, we employ a deterministic partition of poisoning and unlearning samples to reduce sampling-induced variance, and further apply the Projected Conflicting Gradient (PCGrad) technique to resolve the remaining gradient conflicts. Experiments on CIFAR-10 and ImageNet demonstrate that our method maintains ASR comparable to state-of-the-art backdoor attacks, while enabling effective removal of backdoor behavior after unlearning. This work opens a new direction for backdoor attack research and presents new challenges for the security of machine learning systems.
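The abstract's "deterministic partition of poisoning and unlearning samples" can be illustrated with a small sketch. The paper's actual partitioning scheme is not specified here; this hypothetical version assigns each sample index to a fixed subset via a stable hash, so the split never changes between optimization iterations, removing the variance that fresh random resampling would inject into the bilevel gradient estimates.

```python
import hashlib


def deterministic_partition(indices, poison_frac=0.5):
    """Split sample indices into fixed poisoning / unlearning subsets.

    Each index is hashed with a stable digest, so the same sample always
    lands in the same subset on every call -- unlike a fresh random split,
    which would add sampling-induced variance across iterations.
    NOTE: illustrative only; the paper's concrete partition rule may differ.
    """
    poison, unlearn = [], []
    for i in indices:
        # stable 256-bit digest of the sample index
        h = int(hashlib.sha256(str(i).encode()).hexdigest(), 16)
        # map the digest to [0, 1); threshold decides subset membership
        if (h % 10**6) / 10**6 < poison_frac:
            poison.append(i)
        else:
            unlearn.append(i)
    return poison, unlearn
```

Because membership depends only on the index, both the simulated injection pass and the simulated unlearning pass see identical subsets, which is the property the paper relies on to stabilize the bilevel optimization.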
Key Contributions
- First revocable backdoor attack paradigm enabling attackers to proactively and thoroughly erase the backdoor after achieving attack objectives, defeating post-hoc static analysis defenses
- Bilevel optimization formulation that jointly optimizes trigger generation for high attack success rate (ASR) and easy removal via unlearning
- Deterministic sample partitioning and PCGrad technique to resolve gradient conflicts between backdoor injection and removal objectives
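The PCGrad step named in the last contribution can be sketched for the two-objective case here (injection vs. removal). This is a generic two-task PCGrad (Yu et al., 2020) in NumPy, not the paper's implementation: when the two gradients have a negative inner product, each is projected onto the normal plane of the other before they are combined, so neither update undoes the other.

```python
import numpy as np


def pcgrad_two_task(g_inject, g_unlearn):
    """Resolve a gradient conflict between the backdoor-injection and
    unlearning objectives via PCGrad.

    If the gradients conflict (negative dot product), project each onto
    the normal plane of the other, then sum the projected gradients into
    a single update direction. Illustrative sketch, not the paper's code.
    """
    gi = np.asarray(g_inject, dtype=float)
    gu = np.asarray(g_unlearn, dtype=float)
    gi_proj, gu_proj = gi.copy(), gu.copy()
    if gi @ gu < 0:  # objectives conflict
        # remove from each gradient its component along the other
        gi_proj = gi - (gi @ gu) / (gu @ gu) * gu
        gu_proj = gu - (gu @ gi) / (gi @ gi) * gi
    return gi_proj + gu_proj
```

For example, with `g_inject = [1, 0]` and `g_unlearn = [-1, 1]` the raw gradients conflict; after projection the injection gradient is orthogonal to the unlearning gradient, so the combined step no longer fights the removal objective.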
🛡️ Threat Analysis
Core contribution is a novel backdoor injection paradigm — the attacker poisons training data to embed a trigger-based backdoor, then uses machine unlearning to proactively erase it after the attack goal is achieved, improving evasion of static analysis defenses.