Effective Code Membership Inference for Code Completion Models via Adversarial Prompts
Yuan Jiang 1, Zehao Li 1, Shan Huang 1, Christoph Treude 2, Xiaohong Su 1, Tiantian Wang 1
Published on arXiv
arXiv:2511.15107
Membership Inference Attack
OWASP ML Top 10 — ML04
Key Finding
AdvPrompt-MIA achieves an AUC improvement of up to 102% over state-of-the-art membership inference baselines on Code Llama 7B, with strong transferability across models and datasets.
AdvPrompt-MIA
Novel technique introduced
Membership inference attacks (MIAs) on code completion models offer an effective way to assess privacy risks by inferring whether a given code snippet was part of the training data. Existing black- and gray-box MIAs rely on expensive surrogate models or manually crafted heuristic rules, which limit their ability to capture the nuanced memorization patterns exhibited by over-parameterized code language models. To address these challenges, we propose AdvPrompt-MIA, a method specifically designed for code completion models, combining code-specific adversarial perturbations with deep learning. The core novelty of our method lies in designing a series of adversarial prompts that induce variations in the victim code model's output. By comparing these outputs with the ground-truth completion, we construct feature vectors to train a classifier that automatically distinguishes member from non-member samples. This design allows our method to capture richer memorization patterns and accurately infer training set membership. We conduct comprehensive evaluations on widely adopted models, such as Code Llama 7B, over the APPS and HumanEval benchmarks. The results show that our approach consistently outperforms state-of-the-art baselines, with AUC gains of up to 102%. In addition, our method exhibits strong transferability across different models and datasets, underscoring its practical utility and generalizability.
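The pipeline the abstract describes can be sketched end to end: perturb a prompt several ways, compare the victim model's completions against the ground truth to form a feature vector, and train a classifier on labeled member/non-member samples. Everything below is an illustrative stand-in, not the paper's implementation: the perturbation is a placeholder, `difflib`'s ratio substitutes for whatever output-comparison metric the authors use, the "victim" is a toy that reproduces memorized completions verbatim, and logistic regression stands in for their deep classifier.

```python
import difflib
import random
from sklearn.linear_model import LogisticRegression

def similarity(a: str, b: str) -> float:
    # difflib ratio stands in for the paper's output-comparison metric
    return difflib.SequenceMatcher(None, a, b).ratio()

def perturb(prompt: str, i: int) -> str:
    # placeholder for a semantics-preserving adversarial perturbation
    return f"{prompt}  # variant {i}"

def feature_vector(model, prompt, ground_truth, n_variants=4):
    # one feature per adversarial prompt: how closely the model's
    # completion under that prompt tracks the ground-truth completion
    return [similarity(model(perturb(prompt, i)), ground_truth)
            for i in range(n_variants)]

# Toy victim: reproduces memorized (member) completions verbatim and
# guesses noisily for unseen prompts -- mimicking memorization behavior.
rng = random.Random(0)
MEMBERS = {f"def f{k}(x):": f"return x + {k}" for k in range(20)}

def victim(prompt: str) -> str:
    key = prompt.split("#")[0].strip()
    return MEMBERS.get(key, f"return x * {rng.randint(10, 99)}")

X, y = [], []
for k in range(20):                       # member samples
    X.append(feature_vector(victim, f"def f{k}(x):", f"return x + {k}"))
    y.append(1)
for k in range(20, 40):                   # non-member samples
    X.append(feature_vector(victim, f"def g{k}(x):", f"return x - {k}"))
    y.append(0)

clf = LogisticRegression().fit(X, y)      # the paper uses a deep classifier
print(f"training accuracy: {clf.score(X, y):.2f}")
```

The toy makes the attack's intuition visible: memorized samples yield completions that stay locked onto the ground truth across all perturbed prompts (features near 1.0), while non-member completions drift, and the classifier separates the two patterns.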
Key Contributions
- AdvPrompt-MIA: a membership inference attack that uses semantics-preserving adversarial code perturbations to probe memorization patterns in code LLMs
- Feature vector construction by comparing model outputs on adversarially perturbed prompts against ground-truth completions, then training a classifier for membership prediction
- Demonstrated AUC improvements of up to 102% over state-of-the-art baselines on Code Llama 7B across the APPS and HumanEval benchmarks, with strong cross-model transferability
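To make "semantics-preserving adversarial code perturbation" concrete, one plausible instance is variable renaming: the transformed program behaves identically, so any shift in the model's completion reflects sensitivity to surface form rather than semantics. This sketch uses Python's `ast` module; the helper name `rename_perturbation` is illustrative and not taken from the paper.

```python
import ast

class RenameVars(ast.NodeTransformer):
    """Renames identifiers per a mapping while preserving program semantics."""

    def __init__(self, mapping: dict[str, str]):
        self.mapping = mapping

    def visit_Name(self, node: ast.Name) -> ast.Name:
        # rewrite variable reads and writes according to the rename map
        node.id = self.mapping.get(node.id, node.id)
        return node

    def visit_arg(self, node: ast.arg) -> ast.arg:
        # rewrite function parameters as well
        node.arg = self.mapping.get(node.arg, node.arg)
        return node

def rename_perturbation(src: str, mapping: dict[str, str]) -> str:
    return ast.unparse(RenameVars(mapping).visit(ast.parse(src)))

src = (
    "def total(values):\n"
    "    s = 0\n"
    "    for v in values:\n"
    "        s += v\n"
    "    return s\n"
)
print(rename_perturbation(src, {"s": "acc", "v": "item"}))
```

Feeding both the original and renamed versions of a snippet to the victim model, then comparing the two completions against the ground truth, yields exactly the kind of per-perturbation feature the attack aggregates.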
🛡️ Threat Analysis
Primary contribution is a membership inference attack (AdvPrompt-MIA) that determines whether specific code snippets were in the training data of code completion LLMs — the canonical ML04 binary membership question.