defense · arXiv · Sep 26, 2025
Nakyeong Yang, Dong-Kyum Kim, Jea Kwon et al. · Seoul National University · Max Planck Institute for Security and Privacy
Defends LLM unlearning against adversarial relearning attacks by suppressing spurious neurons that hide rather than erase private knowledge
Sensitive Information Disclosure · nlp
Large language models trained on web-scale data can memorize private or sensitive knowledge, raising significant privacy risks. Although some unlearning methods mitigate these risks, they remain vulnerable to "relearning" during subsequent training, which allows a substantial portion of the forgotten knowledge to resurface. In this paper, we show that widely used unlearning methods cause shallow alignment: instead of faithfully erasing the target knowledge, they create spurious unlearning neurons that amplify negative influence to hide it. To overcome this limitation, we introduce Ssiuu, a new class of unlearning methods that employs attribution-guided regularization to prevent spurious negative influence and faithfully remove the target knowledge. Experimental results confirm that our method reliably erases target knowledge and outperforms strong baselines across two practical retraining scenarios: (1) adversarial injection of private data, and (2) benign retraining on an instruction-following benchmark. Our findings highlight the necessity of robust and faithful unlearning methods for the safe deployment of language models.
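The abstract describes attribution-guided regularization only at a high level. Below is a minimal PyTorch sketch of the general idea, not the paper's Ssiuu implementation: it attributes the unlearning loss to MLP neurons via a gradient-times-activation score and adds a hypothetical penalty on strongly negative per-neuron attributions, discouraging the kind of "spurious unlearning neurons" the paper describes. The toy model, the attribution form, and the penalty weight (0.1) are all illustrative assumptions.

    # Minimal sketch (assumptions noted above, not the paper's released code):
    # attribution-guided regularization during gradient-ascent unlearning.
    import torch
    import torch.nn as nn

    class TinyMLP(nn.Module):
        """Stand-in for one transformer MLP block; real models have many."""
        def __init__(self, d_model=32, d_hidden=64, vocab=100):
            super().__init__()
            self.up = nn.Linear(d_model, d_hidden)
            self.act = nn.GELU()
            self.down = nn.Linear(d_hidden, d_model)
            self.head = nn.Linear(d_model, vocab)

        def forward(self, x):
            h = self.act(self.up(x))  # hidden-neuron activations
            self._h = h               # cache for attribution
            return self.head(self.down(h))

    def attribution_penalty(model, loss):
        """Gradient-x-activation attribution per hidden neuron; penalize
        large negative attributions (hypothetical form of the regularizer)."""
        h = model._h
        (grad_h,) = torch.autograd.grad(loss, h, create_graph=True)
        attr = (grad_h * h).sum(dim=0)          # per-neuron influence score
        return torch.relu(-attr).pow(2).sum()   # punish negative influence

    model = TinyMLP()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_forget = torch.randn(8, 32)               # toy "forget set" batch
    y_forget = torch.randint(0, 100, (8,))

    for _ in range(10):
        logits = model(x_forget)
        # Gradient-ascent-style unlearning loss on the forget set...
        unlearn_loss = -nn.functional.cross_entropy(logits, y_forget)
        # ...plus attribution-guided regularization to keep erasure faithful
        # rather than letting neurons merely suppress the target knowledge.
        reg = attribution_penalty(model, unlearn_loss)
        loss = unlearn_loss + 0.1 * reg
        opt.zero_grad()
        loss.backward()
        opt.step()

A real implementation would presumably attribute over every transformer MLP layer and combine this penalty with a retain-set utility loss; the sketch keeps only the piece that distinguishes faithful erasure from hiding.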
llm · transformer