defense 2025

DRAGON: Guard LLM Unlearning in Context via Negative Detection and Reasoning

Yaxuan Wang 1,2, Chris Yuhao Liu 1, Quan Liu 2, Jinglong Pang 1, Wei Wei 2, Yujia Bao 2, Yang Liu 1

2 citations · 71 references · arXiv

Published on arXiv · 2511.05784

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

DRAGON achieves strong unlearning capability across three representative tasks without requiring retain data or base model fine-tuning, while maintaining general language model utility and supporting continual unlearning

DRAGON

Novel technique introduced


Unlearning in Large Language Models (LLMs) is crucial for protecting private data and removing harmful knowledge. Most existing approaches rely on fine-tuning to balance unlearning efficiency with general language capabilities. However, these methods typically require training on, or access to, retain data, which is often unavailable in real-world scenarios. Although such methods can perform well when both forget and retain data are available, few works have demonstrated equivalent capability in more practical, data-limited settings. To overcome these limitations, we propose Detect-Reasoning Augmented GeneratiON (DRAGON), a systematic, reasoning-based framework that uses in-context chain-of-thought (CoT) instructions to guard deployed LLMs before inference. Instead of modifying the base model, DRAGON leverages the inherent instruction-following ability of LLMs and introduces a lightweight detection module that identifies forget-worthy prompts without any retain data. These prompts are then routed through a dedicated CoT guard model to enforce safe and accurate in-context intervention. To robustly evaluate unlearning performance, we introduce novel metrics and a continual unlearning setting. Extensive experiments across three representative unlearning tasks validate the effectiveness of DRAGON, demonstrating its strong unlearning capability, scalability, and applicability in practical scenarios.
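The inference-time flow described in the abstract can be sketched as a simple router: a detector scores each incoming prompt, and prompts judged forget-worthy are answered through the CoT guard model while everything else reaches the unmodified base model. This is an illustrative sketch; the function names and the fixed threshold are assumptions, not the paper's API.

```python
def dragon_respond(prompt, detector, base_model, guard_model, threshold=0.5):
    """Route a prompt at inference time without touching base-model weights.

    detector:    callable returning a confidence that the prompt targets
                 forgotten (unlearned) knowledge.
    guard_model: callable that applies in-context CoT intervention, e.g.
                 refusing or deflecting while staying otherwise helpful.
    """
    confidence = detector(prompt)
    if confidence >= threshold:
        # Forget-worthy: enforce safe, in-context intervention.
        return guard_model(prompt)
    # Benign: the deployed base model answers unchanged.
    return base_model(prompt)
```

Because the base model is never fine-tuned, adding new forget targets only requires updating the detector and guard prompt, which is what makes the continual-unlearning setting cheap in this design.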


Key Contributions

  • DRAGON: a training-free, in-context LLM unlearning framework using chain-of-thought instructions that guards deployed LLMs without modifying base model weights or requiring retain data
  • A robust detection module combining a trained scoring model with a similarity-based metric into a unified confidence score, enabling adaptive thresholding against distributional shifts and paraphrased adversarial attacks
  • Novel evaluation metrics and a continual unlearning benchmark setting for practical data-limited scenarios validated across three representative unlearning tasks
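The second contribution's detection module can be illustrated as follows: a trained scoring model's probability and a similarity-based metric are fused into one confidence score, and the decision threshold is set from held-out calibration scores rather than fixed, so the detector can adapt under distributional shift. The convex-combination fusion, the `alpha` weight, and the quantile rule here are illustrative assumptions, not the paper's exact formulation.

```python
def unified_confidence(score_model_prob, similarity, alpha=0.5):
    """Fuse a trained scorer's probability with a similarity metric
    (both in [0, 1]) into a single confidence score.

    alpha is a hypothetical mixing weight, not a value from the paper.
    """
    return alpha * score_model_prob + (1 - alpha) * similarity

def adaptive_threshold(calibration_scores, quantile=0.95):
    """Pick a detection threshold as an empirical quantile of confidence
    scores on calibration prompts, so the cutoff tracks the current
    score distribution instead of staying fixed."""
    ordered = sorted(calibration_scores)
    idx = min(int(quantile * len(ordered)), len(ordered) - 1)
    return ordered[idx]
```

Fusing two complementary signals is what gives robustness to paraphrased adversarial prompts: a rewording that fools the trained scorer may still sit close to forget-set examples under the similarity metric, and vice versa.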

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, black_box
Datasets
TOFU, WMDP
Applications
llm private data protection, harmful knowledge removal, continual unlearning