Defense · 2026

DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher

Yisheng Zhong, Zhengbang Yang, Zhuangdi Zhu

0 citations · 44 references · arXiv

Published on arXiv · 2601.21283

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

DUET outperforms state-of-the-art unlearning baselines in forgetting effectiveness and utility preservation while remaining robust against reverse-prompt attacks, using orders of magnitude less training data.

DUET (Distilled Unlearning from an Efficient Teacher)

Novel technique introduced


LLM unlearning removes the influence of undesirable knowledge from a model without retraining from scratch, a capability that is indispensable for trustworthy AI. Existing unlearning methods face significant limitations: conventional tuning-based unlearning is computationally heavy and prone to catastrophic forgetting, while in-context unlearning is lightweight and precise but vulnerable to prompt-removal or reverse-engineering attacks. In response, we propose Distilled Unlearning from an Efficient Teacher (DUET), a novel distillation-based unlearning method that combines the merits of these two lines of work. It trains a student model to imitate the behavior of a prompt-steered teacher that effectively refuses to generate undesirable knowledge while preserving general domain knowledge. Extensive evaluations on existing benchmarks with our enriched evaluation protocols demonstrate that DUET achieves higher performance in both forgetting and utility preservation, while being orders of magnitude more data-efficient than state-of-the-art unlearning methods.
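The teacher/student setup described above can be sketched as a data-construction step: the teacher receives each query with a refusal instruction prepended, while the student sees only the bare query and is later trained to match the teacher's outputs, so the refusal ends up in the weights rather than in a removable prompt. This is a minimal illustrative sketch; the prompt wording and function name are assumptions, not taken from the paper.

```python
# Hypothetical sketch of building DUET-style distillation inputs.
# The steering prompt text is illustrative only.
REFUSAL_PROMPT = (
    "You must refuse to reveal any information about the "
    "following forbidden topic: {topic}.\n"
)

def make_distillation_pair(query: str, topic: str) -> tuple[str, str]:
    """Return (teacher_input, student_input) for one query.

    Only query-level data is needed: no reference responses or
    refusal templates are stored, since the teacher generates the
    target behavior on the fly.
    """
    teacher_input = REFUSAL_PROMPT.format(topic=topic) + query
    student_input = query  # no prompt: refusal must live in the weights
    return teacher_input, student_input

teacher_in, student_in = make_distillation_pair(
    "Summarize the plot of the protected book.", "the protected book"
)
```

Because the steering prompt never appears in the student's input, a deployed student has nothing for a reverse-prompt attack to strip away.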


Key Contributions

  • DUET distillation framework that transfers in-context refusal behavior from a prompt-steered teacher to student LLM parameters via Top-K logit alignment, requiring only query-level data without explicit responses or refusal templates.
  • Robustness against reverse-prompt attacks (un-unlearning) by embedding refusal into model parameters rather than relying on removable in-context prompts.
  • Enriched evaluation protocol expanding forget test sets and demonstrating orders-of-magnitude data efficiency over state-of-the-art unlearning baselines on MUSE-Books and WMDP.

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Datasets
MUSE-Books, WMDP
Applications
llm unlearning, copyright content removal, hazardous knowledge suppression