Scrub It Out! Erasing Sensitive Memorization in Code Language Models via Machine Unlearning
Zhaoyang Chu 1, Yao Wan 1, Zhikun Zhang 2, Di Wang 3, Zhou Yang 4, Hongyu Zhang 5, Pan Zhou 1, Xuanhua Shi 1, Hai Jin 1, David Lo 6
Published on arXiv (arXiv:2509.13755)
Model Inversion Attack — OWASP ML Top 10 (ML03)
Sensitive Information Disclosure — OWASP LLM Top 10 (LLM06)
Key Finding
CodeEraser effectively erases targeted sensitive memorization (PII, credentials) while maintaining model utility across CodeParrot, CodeGen-Mono, and Qwen2.5-Coder without full retraining
CodeEraser
Novel technique introduced
While Code Language Models (CLMs) have demonstrated superior performance in software engineering tasks such as code generation and summarization, recent empirical studies reveal a critical privacy vulnerability: these models exhibit unintended memorization of sensitive training data, enabling verbatim reproduction of confidential information when specifically prompted. To address this issue, several approaches, including training data de-duplication and differential privacy augmentation, have been proposed. However, these methods require full-model retraining for deployed CLMs, which incurs substantial computational costs. In this paper, we aim to answer the following research question: Can sensitive information memorized by CLMs be erased effectively and efficiently? We conduct a pioneering investigation into erasing sensitive memorization in CLMs through machine unlearning, a post-hoc modification method that removes specific information from trained models without requiring full retraining. Specifically, we first quantify the memorization risks of sensitive data within CLM training datasets and curate a high-risk dataset of 50,000 sensitive memorized samples as unlearning targets. We study two widely used gradient ascent-based unlearning approaches, the vanilla and constraint-based methods, and introduce CodeEraser, an advanced variant that selectively unlearns sensitive memorized segments in code while preserving the structural integrity and functional correctness of the surrounding code. Extensive experiments on three families of CLMs, i.e., CodeParrot, CodeGen-Mono, and Qwen2.5-Coder, validate the effectiveness and efficiency of CodeEraser in erasing targeted sensitive memorization while maintaining model utility.
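The core mechanism behind the vanilla method can be illustrated on a toy model: train normally by gradient descent, then run gradient *ascent* on the loss of the sample to be forgotten, driving its loss back up. The sketch below is a minimal illustration on a logistic classifier, not the paper's implementation; all names and hyperparameters are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, x, y):
    # gradient of the per-sample cross-entropy loss w.r.t. the weights
    return (sigmoid(x @ w) - y) * x

def loss(w, x, y):
    p = sigmoid(x @ w)
    return -np.log(p if y == 1 else 1.0 - p)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = (X[:, 0] > 0).astype(float)

# 1) standard training: gradient *descent* on all samples
w = np.zeros(3)
for _ in range(200):
    for xi, yi in zip(X, y):
        w -= 0.1 * grad(w, xi, yi)

x_forget, y_forget = X[0], y[0]
loss_before = loss(w, x_forget, y_forget)

# 2) vanilla unlearning: gradient *ascent* on the forget sample only
for _ in range(10):
    w += 0.1 * grad(w, x_forget, y_forget)

loss_after = loss(w, x_forget, y_forget)
print(loss_after > loss_before)  # the model no longer fits the forgotten sample
```

The constraint-based variant studied in the paper additionally regularizes the update so that behavior on retained data stays close to the original model; the ascent term above is the part both variants share.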
Key Contributions
- Quantifies memorization risks of sensitive data (PII, API keys, passwords) in CLM training corpora and curates a 50,000-sample high-risk unlearning target dataset
- Systematically studies gradient ascent-based unlearning approaches (vanilla and constraint-based) for erasing sensitive memorization in deployed CLMs
- Introduces CodeEraser, a selective unlearning method that targets only sensitive memorized segments while preserving surrounding code structure and functional correctness
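The selective objective in the third contribution can be sketched as a token-masked loss: gradient ascent is applied only to the tokens flagged as sensitive, while the surrounding code tokens keep an ordinary (descent) loss so structure and functionality are preserved. This is a hypothetical sketch of such an objective, with `sensitive_mask` and the function name as illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def selective_unlearning_loss(token_losses, sensitive_mask):
    """Combine per-token losses: minimizing this scalar *maximizes* the loss
    on sensitive tokens (erasure) and *minimizes* it elsewhere (retention)."""
    token_losses = np.asarray(token_losses, dtype=float)
    mask = np.asarray(sensitive_mask, dtype=float)
    ascent_term = -(mask * token_losses).sum()         # push sensitive losses up
    retain_term = ((1.0 - mask) * token_losses).sum()  # keep surrounding code fitted
    return ascent_term + retain_term

# Example: tokens 2-4 hold a memorized API key; the rest are ordinary code.
losses = [0.1, 0.2, 0.05, 0.03, 0.04, 0.2]
mask   = [0,   0,   1,    1,    1,    0]
print(round(selective_unlearning_loss(losses, mask), 2))
```

Because only the masked span receives the ascent signal, an optimizer stepping on this scalar erases the secret while the negative-log-likelihood of the surrounding tokens continues to be driven down.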
🛡️ Threat Analysis
The core adversarial threat is an attacker querying a CLM to reconstruct private training data (PII, passwords, API keys) verbatim. CodeEraser defends against this by erasing the memorized sensitive segments before they can be emitted — matching the ML03 adversary model of recovering training data from a trained model.
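The attack surface can be illustrated with a simple extraction probe: prompt the model with a code context and check whether a known secret is reproduced verbatim in the completion. In the sketch below, `generate` is a hypothetical stand-in for a real CLM inference call, and the key is a fake value made up for the demo.

```python
SECRET = "AKIA" + "EXAMPLEKEY1234"   # fake AWS-style key, illustrative only

def generate(prompt):
    # stand-in for a CLM: a model that memorized the secret will
    # complete the credential assignment with it verbatim
    if prompt.endswith("aws_access_key_id = "):
        return prompt + '"' + SECRET + '"'
    return prompt + '""'

def leaks_secret(prompt, secret):
    """Return True if the model's completion reproduces the secret verbatim."""
    return secret in generate(prompt)

print(leaks_secret("aws_access_key_id = ", SECRET))   # leaks
print(leaks_secret("x = ", SECRET))                   # does not leak
```

A successful unlearning pass should flip the first probe to `False` while leaving unrelated completions (the second probe) unchanged — which is exactly how erasure effectiveness versus utility retention is framed.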