Unlearning Imperative: Securing Trustworthy and Responsible LLMs through Engineered Forgetting
James Jin Kang, Dang Bui, Thanh Pham, Huo-Chong Ling
Published on arXiv (arXiv:2511.09855)
- Membership Inference Attack (OWASP ML Top 10, ML04)
- Model Inversion Attack (OWASP ML Top 10, ML03)
- Sensitive Information Disclosure (OWASP LLM Top 10, LLM06)
Key Finding
Robust and verifiable LLM unlearning remains unresolved, with key open needs including efficient retraining-free methods, stronger defenses against adversarial recovery, and accountability governance structures.
The growing use of large language models in sensitive domains has exposed a critical weakness: these systems lack reliable mechanisms to guarantee that private information can be permanently removed once it has been used in training. Retraining from scratch is prohibitively costly, and existing unlearning methods remain fragmented, difficult to verify, and often vulnerable to recovery. This paper surveys recent research on machine unlearning for LLMs and considers how far current approaches can address these challenges. We review methods for evaluating whether forgetting has occurred, the resilience of unlearned models against adversarial attacks, and mechanisms that can support user trust when model complexity or proprietary limits restrict transparency. Technical solutions such as differential privacy, homomorphic encryption, federated learning, and ephemeral memory are examined alongside institutional safeguards including auditing practices and regulatory frameworks. The review finds steady progress, but robust and verifiable unlearning is still unresolved. Efficient techniques that avoid costly retraining, stronger defenses against adversarial recovery, and governance structures that reinforce accountability are needed if LLMs are to be deployed safely in sensitive applications. By integrating technical and organizational perspectives, this study outlines a pathway toward AI systems that can be required to forget, while maintaining both privacy and public trust.
Key Contributions
- Comprehensive review of machine unlearning methods for LLMs, including evaluation methodologies for verifying true forgetting
- Analysis of adversarial resilience of unlearned models against recovery attacks (membership inference, data extraction)
- Integration of technical solutions (DP, HE, federated learning, ephemeral memory) with institutional safeguards (auditing, regulation) into a unified framework for trustworthy LLM unlearning
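Among the technical safeguards the survey covers, differential privacy is the most directly illustrable. The sketch below is not from the paper; it is a minimal stdlib-only example of the classic Laplace mechanism applied to a counting query (the helper names `laplace_noise` and `dp_count` are hypothetical, chosen for illustration):

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) via inverse-CDF, using only the stdlib."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon):
    """Epsilon-DP counting query: a count has sensitivity 1 (adding or
    removing one record changes it by at most 1), so adding
    Laplace(1/epsilon) noise satisfies epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical dataset: ages of 1,000 individuals.
random.seed(42)
ages = [random.randint(18, 90) for _ in range(1000)]
noisy_seniors = dp_count(ages, lambda a: a >= 65, epsilon=1.0)
print(round(noisy_seniors, 1))
```

Smaller `epsilon` means a stronger privacy guarantee but a noisier answer; in the unlearning context, DP bounds how much any single training record can influence the model in the first place, which limits what an attacker can later recover.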
🛡️ Threat Analysis
Adversarial recovery of supposedly deleted training data from LLMs, including reconstruction and extraction attacks, is a core threat model discussed in the context of verifying and stress-testing unlearning methods.
Whether forgetting has truly occurred is primarily assessed via membership inference attacks; the survey treats the resilience of unlearned models against adversarial recovery as, at its core, a membership inference verification problem.
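To make the verification idea concrete, here is a minimal sketch (not from the paper) of a threshold-based membership inference test: if a model still assigns systematically lower loss to supposedly forgotten samples than to never-seen holdout samples, forgetting is incomplete. The losses here are simulated stand-ins for a real model's per-sample losses, and the function name `mia_advantage` is a hypothetical label for the best attack accuracy over all thresholds:

```python
import random

def mia_advantage(forget_losses, holdout_losses):
    """Best attack accuracy over all loss thresholds.
    ~0.5 means the attacker cannot distinguish forget-set samples
    from holdout samples (forgetting is plausible); values well
    above 0.5 mean membership still leaks."""
    labeled = ([(loss, True) for loss in forget_losses]
               + [(loss, False) for loss in holdout_losses])
    best = 0.5
    for threshold, _ in labeled:
        # Predict "member" whenever the loss is at or below the threshold.
        correct = sum((loss <= threshold) == member for loss, member in labeled)
        best = max(best, correct / len(labeled))
    return best

# Simulated per-sample losses: a leaky model still fits its forget set
# (low loss), while a properly unlearned model treats it like unseen data.
random.seed(0)
leaky     = [random.gauss(1.0, 0.3) for _ in range(200)]  # forget set, memorized
unlearned = [random.gauss(2.0, 0.3) for _ in range(200)]  # forget set, after unlearning
holdout   = [random.gauss(2.0, 0.3) for _ in range(200)]  # never-seen data

print(f"before unlearning: {mia_advantage(leaky, holdout):.2f}")
print(f"after unlearning:  {mia_advantage(unlearned, holdout):.2f}")
```

Real audits use stronger attacks (shadow models, calibrated likelihood-ratio tests), but the acceptance criterion is the same: attack advantage on the forget set should fall to chance level.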