
Uncovering Pretraining Code in LLMs: A Syntax-Aware Attribution Approach

Yuanheng Li 1, Zhuoyang Chen 1, Xiaoyun Liu 1, Yuhao Wang 1, Mingwei Liu 2, Yang Shi 1, Kaifeng Huang 1, Shengjie Zhao 1

0 citations · 34 references · arXiv


Published on arXiv (2511.07033)

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

SynPrune consistently outperforms state-of-the-art membership inference methods on code by leveraging Python syntax structure to filter non-authorship tokens from membership scoring

SynPrune

Novel technique introduced


As large language models (LLMs) become increasingly capable, concerns over the unauthorized use of copyrighted and licensed content in their training data have grown, especially in the context of code. Open-source code, often protected by open-source licenses (e.g., GPL), poses legal and ethical challenges when used in pretraining. Detecting whether specific code samples were included in an LLM's training data is thus critical for transparency, accountability, and copyright compliance. We propose SynPrune, a syntax-pruned membership inference attack method tailored for code. Unlike prior MIA approaches that treat code as plain text, SynPrune leverages the structured and rule-governed nature of programming languages. Specifically, it identifies consequent tokens, those that are syntactically required and not reflective of authorship, and excludes them from attribution when computing membership scores. Experimental results show that SynPrune consistently outperforms state-of-the-art methods. It is also robust across varying function lengths and syntax categories.


Key Contributions

  • SynPrune: a syntax-aware MIA that identifies and excludes syntactically-obligatory (consequent) tokens in Python code before computing membership scores, focusing attribution on authorship-reflective tokens
  • Empirical demonstration that treating code as plain text inflates noise in MIA scores, and that leveraging programming language grammar improves detection accuracy
  • Robustness analysis showing consistent gains across varying function lengths and syntax categories
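The paper does not publish its token classifier here, but the core idea of separating syntactically-obligatory tokens from authorship-reflective ones can be sketched with Python's standard `tokenize` module. This is a simplified heuristic stand-in, not SynPrune's actual grammar-based rules: it treats keywords, operators, and punctuation as consequent, and identifiers and literals as authorship-reflective.

```python
import io
import keyword
import tokenize

# Token types that are pure layout bookkeeping, not content
_SKIP = {tokenize.NEWLINE, tokenize.NL, tokenize.INDENT,
         tokenize.DEDENT, tokenize.ENDMARKER}

def consequent_token_mask(source: str) -> list[tuple[str, bool]]:
    """Return (token_text, is_consequent) pairs for Python source.

    Heuristic: operators/punctuation and reserved keywords are
    syntactically obligatory ("consequent"); names, literals, and
    comments reflect authorship choices.
    """
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type in _SKIP:
            continue
        obligatory = (
            tok.type == tokenize.OP
            or (tok.type == tokenize.NAME and keyword.iskeyword(tok.string))
        )
        out.append((tok.string, obligatory))
    return out

src = "def add(a, b):\n    return a + b\n"
for text, obligatory in consequent_token_mask(src):
    print(f"{text!r:10} {'consequent' if obligatory else 'authorship'}")
```

Under this heuristic, `def`, `(`, `:`, and `+` are flagged as consequent, while `add`, `a`, and `b` survive as authorship-reflective tokens eligible for membership scoring.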

🛡️ Threat Analysis

Membership Inference Attack

SynPrune is a membership inference attack: its core purpose is determining whether specific code samples were present in an LLM's training set — the canonical binary membership question. It improves MIA scores by excluding syntactically required (low-entropy) tokens from the membership signal computation.
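To make the scoring step concrete, here is a minimal sketch of a likelihood-based membership score with syntax pruning applied. The interface is hypothetical (per-token log-probabilities are assumed to come from the target model); the paper's exact aggregation may differ, but the contrast it draws holds: syntactically required tokens are near-certain under any competent model, so averaging over them dilutes the membership signal.

```python
def membership_score(token_logprobs: list[float],
                     consequent_mask: list[bool]) -> float:
    """Average log-likelihood over authorship-reflective tokens only.

    token_logprobs[i]  : model log-probability of token i (hypothetical input)
    consequent_mask[i] : True when token i is syntactically required
    Higher (less negative) scores suggest training-set membership.
    """
    kept = [lp for lp, c in zip(token_logprobs, consequent_mask) if not c]
    if not kept:
        return float("-inf")  # nothing attributable to score
    return sum(kept) / len(kept)

# Toy numbers: obligatory tokens (e.g. the ':' after a def) get high
# probability regardless of membership, masking the informative tokens.
logprobs = [-0.1, -2.3, -0.05, -1.8, -0.02]
mask     = [True, False, True, False, True]

plain  = sum(logprobs) / len(logprobs)     # code treated as plain text
pruned = membership_score(logprobs, mask)  # syntax-pruned score
print(round(plain, 3), round(pruned, 3))
```

The pruned score is driven entirely by the two authorship-reflective tokens, so member and non-member samples separate on the tokens the model could only have learned from the sample itself.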


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
grey_box, inference_time
Applications
llm pretraining data attribution, code copyright detection, open-source license compliance auditing