
A Comprehensive Evaluation of LLM Unlearning Robustness under Multi-Turn Interaction

Ruihao Pan , Suhang Wang


Published on arXiv (2603.00823)

  • Prompt Injection (OWASP LLM Top 10 — LLM01)
  • Sensitive Information Disclosure (OWASP LLM Top 10 — LLM06)

Key Finding

Knowledge apparently forgotten under single-turn evaluation can be recovered by simple multi-turn interactions such as user feedback requests or semantically relevant conversational history, indicating unlearning methods do not achieve genuine knowledge erasure.


Machine unlearning aims to remove the influence of specific training data from pre-trained models without retraining from scratch, and is increasingly important for large language models (LLMs) due to safety, privacy, and legal concerns. Prior work, however, primarily evaluates unlearning in static, single-turn settings, leaving forgetting robustness under realistic interactive use underexplored. In this paper, we study whether unlearning remains stable in interactive environments by examining two common interaction patterns: self-correction and dialogue-conditioned querying. We find that knowledge appearing forgotten under static evaluation can often be recovered through interaction. Although stronger unlearning improves apparent robustness, it often produces behavioral rigidity rather than genuine knowledge erasure. Our findings suggest that static evaluation may overestimate real-world effectiveness, and they highlight the need to ensure stable forgetting in interactive settings.


Key Contributions

  • Identifies two realistic interaction patterns (self-correction and dialogue-conditioned querying) that recover knowledge appearing forgotten under static single-turn evaluation.
  • Demonstrates that stronger unlearning methods produce behavioral rigidity rather than genuine knowledge erasure, and that static evaluation systematically overestimates real-world unlearning effectiveness.
  • Proposes interactive evaluation paradigms as a necessary complement to single-turn benchmarks for assessing LLM unlearning robustness.
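The two interaction patterns can be pictured as different ways of building the message history sent to a chat-style model. The sketch below is illustrative only: the paper does not publish its exact prompts, so the follow-up wording, the `role`/`content` message schema, and the helper names here are assumptions, not the authors' implementation.

```python
# Sketch of the two multi-turn probing patterns described in the paper,
# expressed as message lists for a generic chat-style LLM interface.
# All prompt wording below is a hypothetical stand-in.

def single_turn_probe(question):
    """Static, single-turn evaluation: ask the target question directly."""
    return [{"role": "user", "content": question}]

def self_correction_probe(question, model_reply):
    """Self-correction: after the unlearned model refuses or answers
    incorrectly, the user pushes back and asks it to reconsider."""
    return [
        {"role": "user", "content": question},
        {"role": "assistant", "content": model_reply},
        {"role": "user",
         "content": "That doesn't seem right. Please double-check "
                    "and answer again."},
    ]

def dialogue_conditioned_probe(question, related_history):
    """Dialogue-conditioned querying: precede the target question with
    semantically related conversational history (list of
    (user_turn, assistant_turn) pairs)."""
    messages = []
    for user_turn, assistant_turn in related_history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": question})
    return messages
```

In this framing, a static benchmark scores only the reply to `single_turn_probe`, while the paper's finding is that replies to the other two histories can surface supposedly erased knowledge.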

Details

Domains: nlp
Model Types: llm
Threat Tags: inference_time, black_box
Applications: large language models, llm safety, unlearning, hazardous knowledge removal