Defense · 2025

Beyond Sharp Minima: Robust LLM Unlearning via Feedback-Guided Multi-Point Optimization

Wenhan Wu 1, Zheyuan Liu 2, Chongyang Gao 1, Ren Wang 3, Kaize Ding 1

1 citation · 68 references · arXiv


Published on arXiv · 2509.20230

Sensitive Information Disclosure (OWASP LLM Top 10: LLM06)

Prompt Injection (OWASP LLM Top 10: LLM01)

Key Finding

StableUN significantly reduces relearning attack success and jailbreak rates versus conventional unlearning baselines on WMDP and MUSE while maintaining competitive downstream utility.

StableUN

Novel technique introduced


Current LLM unlearning methods face a critical security vulnerability that undermines their fundamental purpose: while they appear to successfully remove sensitive or harmful knowledge, this "forgotten" information remains precariously recoverable through relearning attacks. We identify the root cause: conventional methods optimize the forgetting loss at individual data points, driving model parameters toward sharp minima in the loss landscape. In these unstable regions, even minimal parameter perturbations can drastically alter the model's behavior. Relearning attacks exploit this vulnerability, using just a few fine-tuning samples to navigate the steep gradients surrounding these unstable regions and rapidly recover knowledge that was supposedly erased. This exposes a critical robustness gap between apparent unlearning and actual knowledge removal. To address this issue, we propose StableUN, a bi-level feedback-guided optimization framework that explicitly seeks more stable parameter regions via neighborhood-aware optimization. It integrates forgetting feedback, which uses adversarial perturbations to probe parameter neighborhoods, with remembering feedback that preserves model utility, and aligns the two objectives through gradient projection. Experiments on the WMDP and MUSE benchmarks demonstrate that our method is significantly more robust against both relearning and jailbreaking attacks while maintaining competitive utility performance.
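The abstract's "forgetting feedback" evaluates the forgetting objective not only at the current parameters but in their neighborhood, which is closely related to sharpness-aware training. Below is a minimal PyTorch sketch of such a neighborhood-probing step under that assumption; the function names, the L2 perturbation radius `rho`, and the SAM-style ascent step are illustrative choices, not the authors' released implementation.

```python
import torch

def forgetting_feedback(model, forget_batch, forget_loss_fn, rho=0.05):
    """Probe the parameter neighborhood with an adversarial (ascent) perturbation,
    then measure the forgetting gradient at the perturbed point.
    `rho` is a hypothetical neighborhood radius (assumption, not from the paper)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # First pass: gradient of the forgetting loss at the current parameters.
    loss = forget_loss_fn(model, forget_batch)
    grads = torch.autograd.grad(loss, params)

    # Ascend to an adversarial point inside an L2 ball of radius rho.
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    eps = [rho * g / grad_norm for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)

    # Second pass: forgetting gradient measured in the perturbed neighborhood.
    loss_neighbor = forget_loss_fn(model, forget_batch)
    neighbor_grads = torch.autograd.grad(loss_neighbor, params)

    # Restore the original parameters before the outer update.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    return neighbor_grads
```

Optimizing against this neighborhood gradient, rather than the point-wise one, is what pushes the unlearned parameters toward flatter regions that a few relearning steps cannot easily escape.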


Key Contributions

  • Identifies that conventional LLM unlearning drives parameters toward sharp loss-landscape minima, making forgotten knowledge trivially recoverable via relearning attacks with minimal fine-tuning.
  • Proposes StableUN, a bi-level feedback-guided framework combining adversarial forgetting feedback (probing parameter neighborhoods) and remembering feedback (preserving utility) via gradient projection to seek flat, stable minima (a gradient-projection sketch follows this list).
  • Demonstrates significantly improved robustness against both relearning attacks and jailbreaking on the WMDP and MUSE benchmarks while maintaining competitive model utility.
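The abstract states only that the two objectives are "aligned through gradient projection." A common way to realize this is PCGrad-style conflict removal, which drops the component of the forgetting update that opposes the remembering (utility) gradient; the sketch below assumes that formulation and is not taken from the paper's code.

```python
import torch

def project_conflicting(forget_grads, retain_grads):
    """If the forgetting update conflicts with the remembering (utility) gradient,
    remove its component along the conflicting direction before stepping."""
    f = torch.cat([g.flatten() for g in forget_grads])
    r = torch.cat([g.flatten() for g in retain_grads])
    dot = torch.dot(f, r)
    if dot < 0:  # negative inner product means the objectives conflict
        f = f - dot / (r.norm() ** 2 + 1e-12) * r
    # Unflatten back to per-parameter shapes for the optimizer step.
    out, idx = [], 0
    for g in forget_grads:
        n = g.numel()
        out.append(f[idx:idx + n].reshape(g.shape))
        idx += n
    return out
```

In this reading, the projected forgetting gradient can be combined with the remembering gradient in the outer update so that unlearning progress does not come at the cost of downstream utility.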

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, training_time
Datasets
WMDP, MUSE
Applications
llm unlearning, harmful knowledge removal, sensitive training data removal