Dual-Space Smoothness for Robust and Balanced LLM Unlearning
Han Yan 1,2, Zheyuan Liu 1, Meng Jiang 1
Published on arXiv (2509.23362)
Prompt Injection
OWASP LLM Top 10 — LLM01
Sensitive Information Disclosure
OWASP LLM Top 10 — LLM06
Key Finding
PRISM outperforms state-of-the-art (SOTA) unlearning baselines on WMDP and MUSE in both conversational-dialogue and continuous-text settings, remains robust under jailbreak and relearning attacks, and achieves a better balance among unlearning effectiveness, utility preservation, and privacy protection.
PRISM (Probe-guided Iterative Smoothness Minimization)
Novel technique introduced
With the rapid advancement of large language models, machine unlearning has emerged to address growing concerns around user privacy, copyright infringement, and overall safety. Yet state-of-the-art (SOTA) unlearning methods often suffer from catastrophic forgetting and metric imbalance, for example by over-optimizing one objective (e.g., unlearning effectiveness, utility preservation, or privacy protection) at the expense of the others. In addition, small perturbations in the representation or parameter space can be exploited by relearning and jailbreak attacks. To address these challenges, we propose PRISM, a unified framework that enforces dual-space smoothness in the representation and parameter spaces to improve robustness and balance unlearning metrics. PRISM consists of two smoothness optimization stages: (i) a representation-space stage that employs a robustly trained probe to defend against jailbreak attacks, and (ii) a parameter-space stage that decouples retain-forget gradient conflicts, reduces imbalance, and smooths the parameter space to mitigate relearning attacks. Extensive experiments on WMDP and MUSE, across conversational-dialogue and continuous-text settings, show that PRISM outperforms SOTA baselines under multiple attacks while achieving a better balance among key metrics.
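The representation-space stage rests on a probe trained over hidden activations to flag forget-related (or adversarially manipulated) inputs. As a minimal illustrative sketch, not the paper's implementation, a linear logistic probe over representation vectors can be trained with NumPy; the synthetic Gaussian "activations", dimensions, and training loop below are all placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Toy stand-ins for hidden representations: "benign" vs "forget-related"
# activations drawn from two shifted Gaussians (purely illustrative).
benign = rng.normal(loc=-1.0, scale=1.0, size=(200, dim))
forget = rng.normal(loc=+1.0, scale=1.0, size=(200, dim))
X = np.vstack([benign, forget])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Linear probe trained with plain gradient descent on the logistic loss.
w, b = np.zeros(dim), 0.0
lr = 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

def probe_flags(reps):
    """Return True where the probe scores a representation as forget-related."""
    return (reps @ w + b) > 0.0

accuracy = np.mean(probe_flags(X) == (y == 1))
```

In the actual framework the probe would score LLM hidden states rather than synthetic vectors, and robust training would additionally expose it to perturbed inputs; this sketch only shows the probe-classification mechanics.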
Key Contributions
- PRISM framework enforcing dual-space smoothness (representation + parameter) via min-max optimization to robustify LLM unlearning against both jailbreak and relearning attacks
- Representation-space stage using a robustly trained probe to detect and block adversarial prompt manipulations that bypass unlearning
- Parameter-space stage that decouples retain-forget gradient conflicts via SAM-style sharpness minimization, reducing catastrophic forgetting while hardening against relearning
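The SAM-style sharpness minimization in the parameter-space stage follows the standard two-step pattern: ascend to a worst-case point inside a small parameter ball, then descend using the gradient taken there, which flattens the loss landscape around the solution. A minimal NumPy sketch on a toy quadratic loss (the loss, `rho`, and the learning rate are illustrative choices, not values from the paper):

```python
import numpy as np

def loss(theta):
    # Toy quadratic standing in for the unlearning objective.
    return 0.5 * np.sum((theta - 3.0) ** 2)

def grad(theta):
    return theta - 3.0

def sam_step(theta, lr=0.1, rho=0.05):
    """One sharpness-aware minimization step:
    1) ascend to the worst-case point within an L2 ball of radius rho,
    2) descend with the gradient evaluated at that perturbed point."""
    g = grad(theta)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # scaled ascent direction
    g_adv = grad(theta + eps)                    # gradient at perturbed params
    return theta - lr * g_adv

theta = np.zeros(3)
for _ in range(100):
    theta = sam_step(theta)
```

Because the descent gradient is taken at the perturbed point, the update penalizes sharp minima where a small parameter perturbation (such as a relearning attack's fine-tuning step) would sharply change the loss.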