Backdoor Token Unlearning: Exposing and Defending Backdoors in Pretrained Language Models
Peihai Jiang 1, Xixiang Lyu 1, Yige Li 2, Jing Ma 1
Published on arXiv
arXiv:2501.03272
Model Poisoning
OWASP ML Top 10 — ML10
Key Finding
BTU substantially reduces backdoor attack success rates across four attack types and three datasets while preserving downstream task accuracy during supervised fine-tuning of pretrained language models.
BTU (Backdoor Token Unlearning)
Novel technique introduced
Supervised fine-tuning has become the predominant method for adapting large pretrained models to downstream tasks. However, recent studies have revealed that these models are vulnerable to backdoor attacks, where even a small number of malicious samples can successfully embed backdoor triggers into the model. While most existing defense methods focus on post-training backdoor defense, efficiently defending against backdoor attacks during the training phase remains largely unexplored. To address this gap, we propose a novel defense method called Backdoor Token Unlearning (BTU), which proactively detects and neutralizes trigger tokens during the training stage. Our work is based on two key findings: 1) backdoor learning causes distinctive differences between backdoor token parameters and clean token parameters in word embedding layers, and 2) the success of backdoor attacks heavily depends on backdoor token parameters. The BTU defense leverages these properties to identify aberrant embedding parameters and subsequently removes backdoor behaviors using a fine-grained unlearning technique. Extensive evaluations across three datasets and four types of backdoor attacks demonstrate that BTU effectively defends against these threats while preserving the model's performance on primary tasks. Our code is available at https://github.com/XDJPH/BTU.
Key Contributions
- Identifies that backdoor learning causes distinctive differences in word embedding parameters between trigger tokens and clean tokens, and that backdoor activation depends heavily on these trigger token embeddings
- Proposes BTU (Backdoor Token Unlearning), a two-stage anti-backdoor training defense: (1) identify the top α% most aberrant word-embedding parameters as backdoor-related, and (2) replace them with benign padding-token embeddings
- Demonstrates effectiveness across three datasets and four NLP backdoor attack types (rare word, style, syntactic, etc.) with minimal clean task accuracy degradation
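The two-stage defense above can be sketched as follows. This is a minimal illustrative simplification, not the paper's implementation: it assumes access to the word-embedding matrix before and after fine-tuning, uses per-row L2 drift as the aberrance score (the paper's actual detection statistic may differ), and the function name and `alpha` parameter are hypothetical.

```python
import numpy as np

def btu_defense(pretrained_emb, finetuned_emb, pad_id, alpha=0.01):
    """Hypothetical sketch of BTU-style token unlearning.

    Stage 1: flag the top-alpha fraction of vocabulary tokens whose
    embedding rows drifted the most during fine-tuning (aberrant
    parameters, likely backdoor trigger tokens).
    Stage 2: overwrite the flagged rows with the benign padding-token
    embedding, neutralizing the trigger behavior.
    """
    # Per-token drift between pretrained and fine-tuned embeddings.
    drift = np.linalg.norm(finetuned_emb - pretrained_emb, axis=1)
    k = max(1, int(alpha * len(drift)))
    suspects = np.argsort(drift)[-k:]          # top-alpha% aberrant tokens
    cleaned = finetuned_emb.copy()
    cleaned[suspects] = finetuned_emb[pad_id]  # unlearn via pad embedding
    return cleaned, suspects
```

In this toy setting, a trigger token whose embedding row shifts far more than clean tokens during poisoned fine-tuning is flagged and reset, which is the intuition behind the paper's finding that backdoor activation depends heavily on trigger-token embeddings.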
🛡️ Threat Analysis
Primary contribution is a defense against backdoor/trojan attacks in pretrained language models — BTU detects trigger tokens via anomalous embedding parameters and neutralizes them using a fine-grained unlearning technique during training, directly targeting the hidden trigger-based behavior that defines ML10.