
AST-PAC: AST-guided Membership Inference for Code

Roham Koohestani, Ali Al-Kaswan, Jonathan Katzy, Maliheh Izadi

0 citations · 23 references · arXiv (Cornell University)


Published on arXiv

arXiv:2602.13240

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

AST-PAC improves membership inference AUC on syntactically large code files where PAC degrades, but under-mutates small files and underperforms on alphanumeric-rich code, motivating future size-adaptive calibration strategies.

AST-PAC

Novel technique introduced


Code Large Language Models are frequently trained on massive datasets containing restrictively licensed source code. This creates urgent data governance and copyright challenges. Membership Inference Attacks (MIAs) can serve as an auditing mechanism to detect unauthorized data usage in models. While attacks like the Loss Attack provide a baseline, more involved methods like Polarized Augment Calibration (PAC) remain underexplored in the code domain. This paper presents an exploratory study evaluating these methods on 3B--7B parameter code models. We find that while PAC generally outperforms the Loss baseline, its effectiveness relies on augmentation strategies that disregard the rigid syntax of code, leading to performance degradation on larger, complex files. To address this, we introduce AST-PAC, a domain-specific adaptation that utilizes Abstract Syntax Tree (AST) based perturbations to generate syntactically valid calibration samples. Preliminary results indicate that AST-PAC improves as syntactic size grows, where PAC degrades, but under-mutates small files and underperforms on alphanumeric-rich code. Overall, the findings motivate future work on syntax-aware and size-adaptive calibration as a prerequisite for reliable provenance auditing of code language models.
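The calibration idea behind PAC can be illustrated with a minimal score function: instead of thresholding a sample's loss directly, the loss is compared against the losses of augmented (perturbed) copies of the same sample. The sketch below is a simplified illustration under that assumption; the function name `pac_membership_score` and the plain mean-difference calibration are illustrative choices, not the paper's exact formulation.

```python
import statistics


def pac_membership_score(target_loss: float, neighbor_losses: list[float]) -> float:
    """Simplified PAC-style calibrated membership score.

    Intuition: a training member tends to have markedly lower loss than
    its perturbed neighbors, because the model memorized the exact text
    but not the perturbations. A larger score means stronger evidence
    of membership.
    """
    return statistics.mean(neighbor_losses) - target_loss
```

A sample with loss 1.0 whose perturbed neighbors score 2.0 and 3.0 yields `pac_membership_score(1.0, [2.0, 3.0]) == 1.5`, i.e. a strong membership signal; a non-member whose loss matches its neighbors' scores near zero.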


Key Contributions

  • Baseline comparison of Loss Attack vs. PAC on 3B–7B parameter code LLMs (Mellum-4B, StarCoder2-3B/7B, SmolLM3-3B)
  • Stratified robustness analysis showing PAC degrades on syntactically large/complex files due to syntax-unaware token-swap augmentation
  • AST-PAC: a syntax-aware calibration variant that uses Tree-Sitter AST-guided perturbations to produce syntactically valid calibration neighbors, improving MIA performance on large files
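The core of the AST-guided perturbation idea is that calibration neighbors stay syntactically valid because mutations operate on the parse tree rather than on raw tokens. The paper uses Tree-Sitter on Java; the sketch below instead uses Python's built-in `ast` module on Python source purely for illustration. The class and function names (`RenameIdentifiers`, `ast_perturb`) are hypothetical, and identifier renaming is only one coarse example of a syntax-preserving mutation.

```python
import ast


class RenameIdentifiers(ast.NodeTransformer):
    """Rename every identifier to a fresh placeholder (v0, v1, ...).

    Because the transformation rewrites the parse tree, the output is
    always syntactically valid Python, unlike random token swaps.
    Note: this coarse sketch also renames calls such as `print`, which
    changes semantics but, crucially for calibration, not syntax.
    """

    def __init__(self) -> None:
        self.mapping: dict[str, str] = {}

    def visit_Name(self, node: ast.Name) -> ast.Name:
        if node.id not in self.mapping:
            self.mapping[node.id] = f"v{len(self.mapping)}"
        node.id = self.mapping[node.id]
        return node


def ast_perturb(source: str) -> str:
    """Produce a syntactically valid perturbed neighbor of `source`."""
    tree = ast.parse(source)
    tree = RenameIdentifiers().visit(tree)
    return ast.unparse(tree)
```

For example, `ast_perturb("x = 1\ny = x + 2")` yields `"v0 = 1\nv1 = v0 + 2"`, which still compiles, whereas a token-level swap could easily produce unparseable code.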

🛡️ Threat Analysis

Membership Inference Attack

The paper's primary contribution is a membership inference attack (AST-PAC) that determines whether specific source code files were in a code LLM's training set. It evaluates Loss Attack and PAC baselines and proposes a syntax-aware calibration variant — all directly targeting the binary membership inference problem.
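The Loss Attack baseline mentioned above reduces to a threshold test on the model's per-token loss: low loss on a sample suggests memorization and therefore membership. A minimal sketch, assuming per-token probabilities are available (the helper name `loss_attack` and the threshold value are illustrative, not from the paper):

```python
import math


def loss_attack(token_probs: list[float], threshold: float = 1.0) -> bool:
    """Classic Loss Attack: predict membership when the mean negative
    log-likelihood (i.e. the per-token loss) falls below a threshold.

    token_probs: the model's probability for each ground-truth token.
    """
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return nll < threshold
```

A file the model assigns high token probabilities (e.g. 0.9, 0.95) is flagged as a member; one with near-uniform probabilities is not. PAC and AST-PAC refine exactly this decision by calibrating the loss against perturbed neighbors.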


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
grey_box, inference_time
Datasets
The Heap (Java subset)
Applications
code llms, source code copyright auditing, training data provenance