
DUP: Detection-guided Unlearning for Backdoor Purification in Language Models

Man Hu 1, Yahui Ding 1, Yatao Yang 1, Liangyu Chen 1, Yanhao Jia 2, Shuai Zhao 2


Published on arXiv (2508.01647)

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

DUP achieves state-of-the-art backdoor defense across four attack methods, two PLM architectures, two contemporary LLMs, and three benchmark datasets, while remaining robust against adaptive attacks with feature-level regularization.

DUP (Detection-guided Unlearning for Purification)

Novel technique introduced


As backdoor attacks become more stealthy and robust, they reveal critical weaknesses in current defense strategies: detection methods often rely on coarse-grained feature statistics, and purification methods typically require full retraining or additional clean models. To address these challenges, we propose DUP (Detection-guided Unlearning for Purification), a unified framework that integrates backdoor detection with unlearning-based purification. The detector captures feature-level anomalies by jointly leveraging class-agnostic distances and inter-layer transitions. These deviations are combined through a weighted scheme to identify poisoned inputs, enabling more fine-grained analysis. Based on the detection results, we purify the model through a parameter-efficient unlearning mechanism that avoids full retraining and does not require any external clean model. Specifically, we repurpose knowledge distillation to guide the student model toward increasing its output divergence from the teacher on detected poisoned samples, effectively forcing it to unlearn the backdoor behavior. Extensive experiments across diverse attack methods and language model architectures demonstrate that DUP achieves superior defense performance in detection accuracy and purification efficacy. Our code is available at https://github.com/ManHu2025/DUP.
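The unlearning objective described above can be sketched as a "reversed" distillation loss: the student stays close to the teacher on clean inputs but is pushed to *diverge* from it on detected poisoned inputs. The function below is a minimal illustration, not the paper's implementation; the name `dup_unlearning_loss`, the temperature, and the `alpha` weighting are assumptions for the sketch.

```python
import numpy as np

def softmax(z, t=1.0):
    # Temperature-scaled softmax, numerically stabilized.
    z = np.asarray(z, dtype=float) / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    # KL(p || q), averaged over the batch; epsilon guards against log(0).
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)))

def dup_unlearning_loss(student_clean, teacher_clean,
                        student_poison, teacher_poison,
                        temperature=2.0, alpha=1.0):
    """Hypothetical sketch of detection-guided unlearning via distillation.

    Matches the (frozen) teacher on clean samples, while *maximizing*
    divergence from it on samples the detector flagged as poisoned.
    """
    t2 = temperature ** 2
    # Standard KD term: preserve clean-task behavior.
    kd_clean = kl(softmax(teacher_clean, temperature),
                  softmax(student_clean, temperature)) * t2
    # Reversed term: subtracting it rewards divergence on poisoned inputs.
    kd_poison = kl(softmax(teacher_poison, temperature),
                   softmax(student_poison, temperature)) * t2
    return kd_clean - alpha * kd_poison
```

In the paper's setup this loss would be minimized while updating only parameter-efficient adapters (LoRA) on the student, leaving the base weights frozen.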


Key Contributions

  • Composite backdoor detector that integrates class-agnostic distance metrics and inter-layer feature trajectory metrics with adaptive layer selection for fine-grained poisoned sample identification
  • Parameter-efficient backdoor purification via LoRA fine-tuning repurposing knowledge distillation to maximize output divergence on detected poisoned samples, avoiding full retraining and external clean models
  • Unified DUP framework achieving state-of-the-art detection and purification across four backdoor attacks, two PLM architectures (e.g., BERT), two LLMs (LLaMA-3.2, Qwen-2.5), and three benchmark datasets
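The composite detector in the first contribution can be sketched as a weighted sum of two feature-level signals: a class-agnostic distance of each layer's representation from a clean-data centroid, and an inter-layer transition term measuring how sharply the representation drifts between consecutive layers. This is an illustrative sketch only; `anomaly_score`, the Euclidean/cosine choices, and the equal default weights are assumptions, not the paper's exact metrics.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def anomaly_score(layer_feats, clean_means, w_dist=0.5, w_trans=0.5):
    """Hypothetical composite detector score for one input.

    layer_feats: per-layer feature vectors for the input (selected layers).
    clean_means: per-layer centroids estimated from held-out clean data.
    """
    # Class-agnostic distance term: mean distance to the clean centroids.
    dist = np.mean([np.linalg.norm(f - m) for f, m in zip(layer_feats, clean_means)])
    # Inter-layer transition term: drift between consecutive layers.
    trans = np.mean([1.0 - cosine(layer_feats[i], layer_feats[i + 1])
                     for i in range(len(layer_feats) - 1)])
    return w_dist * dist + w_trans * trans

def detect(scores, threshold):
    # Flag inputs whose weighted score exceeds a calibration threshold.
    return [s > threshold for s in scores]
```

A poisoned input whose trigger pushes features away from the clean centroids, or causes abrupt inter-layer jumps, receives a higher score and is routed to the unlearning stage.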

🛡️ Threat Analysis

Model Poisoning

Paper directly targets backdoor/trojan behavior in language models — proposes both detection of backdoor-poisoned inputs and purification of the embedded backdoor through parameter-efficient unlearning, evaluated against BadNets and other backdoor attacks.


Details

Domains
nlp
Model Types
transformer, llm
Threat Tags
training_time, inference_time, targeted
Datasets
SST-2
Applications
text classification, language model deployment security