Defense · 2026

Detecting Data Poisoning in Code Generation LLMs via Black-Box, Vulnerability-Oriented Scanning

Shenao Yan 1, Shimaa Ahmed 2, Shan Jin 2, Sunpreet S. Arora 2, Yiwei Cai 2, Yizhen Wang 2, Yuan Hong 1


Published on arXiv: 2603.17174

  • Data Poisoning Attack (OWASP ML Top 10 — ML02)
  • Model Poisoning (OWASP ML Top 10 — ML10)
  • Training Data Poisoning (OWASP LLM Top 10 — LLM03)

Key Finding

Achieves 97%+ detection accuracy against four representative attacks across three vulnerability classes with substantially lower false positives than prior methods

CodeScan

Novel technique introduced


Code generation large language models (LLMs) are increasingly integrated into modern software development workflows. Recent work has shown that these models are vulnerable to backdoor and poisoning attacks that induce the generation of insecure code, yet effective defenses remain limited. Existing scanning approaches rely on token-level generation consistency to invert attack targets, which is ineffective for source code where identical semantics can appear in diverse syntactic forms. We present CodeScan, which, to the best of our knowledge, is the first poisoning-scanning framework tailored to code generation models. CodeScan identifies attack targets by analyzing structural similarities across multiple generations conditioned on different clean prompts. It combines iterative divergence analysis with abstract syntax tree (AST)-based normalization to abstract away surface-level variation and unify semantically equivalent code, isolating structures that recur consistently across generations. CodeScan then applies LLM-based vulnerability analysis to determine whether the extracted structures contain security vulnerabilities and flags the model as compromised when such a structure is found. We evaluate CodeScan against four representative attacks under both backdoor and poisoning settings across three real-world vulnerability classes. Experiments on 108 models spanning three architectures and multiple model sizes demonstrate 97%+ detection accuracy with substantially lower false positives than prior methods.
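The AST-based normalization step can be illustrated with a minimal Python sketch. This is not the paper's implementation — the names `normalize` and `_Canonicalize` are illustrative — but it shows the core idea: parse each generation, rename identifiers to canonical placeholders, and compare the resulting ASTs so that syntactically different but semantically equivalent snippets collapse to the same form.

```python
import ast

class _Canonicalize(ast.NodeTransformer):
    """Rename identifiers to canonical placeholders (v0, v1, ...) so
    that semantically equivalent code yields an identical AST dump."""
    def __init__(self):
        self.names = {}

    def _canon(self, name):
        # Assign placeholders in first-seen order.
        if name not in self.names:
            self.names[name] = f"v{len(self.names)}"
        return self.names[name]

    def visit_Name(self, node):
        return ast.copy_location(
            ast.Name(id=self._canon(node.id), ctx=node.ctx), node)

    def visit_arg(self, node):
        node.arg = self._canon(node.arg)
        return node

    def visit_FunctionDef(self, node):
        node.name = self._canon(node.name)
        self.generic_visit(node)
        return node

def normalize(source: str) -> str:
    """Return a surface-variation-free fingerprint of `source`."""
    tree = _Canonicalize().visit(ast.parse(source))
    return ast.dump(tree)

# Different names, same semantics -> identical normalized form.
assert normalize("def add(x, y):\n    return x + y") == \
       normalize("def plus(a, b):\n    return a + b")
```

A full system would also need to normalize literals, statement order where legal, and equivalent API idioms; this sketch only abstracts identifier choice, which is the simplest source of surface-level variation.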


Key Contributions

  • First poisoning-scanning framework tailored specifically to code generation LLMs
  • AST-based normalization to abstract syntactic variation and identify semantically equivalent vulnerable code structures
  • LLM-based vulnerability analysis integrated with iterative divergence analysis to detect poisoned models
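To make the "structures that recur consistently across generations" idea concrete, here is a hedged Python sketch. The fingerprint (a bag of AST node types) and the recurrence threshold are simplified stand-ins, not the paper's actual divergence analysis; the idea is that a poisoned model keeps reproducing the same injected structure even under different clean prompts.

```python
import ast
from collections import Counter

def structure(source: str) -> tuple:
    """Coarse structural fingerprint: the sequence of AST node
    types, ignoring identifiers and literals entirely."""
    return tuple(type(n).__name__ for n in ast.walk(ast.parse(source)))

def recurring(generations, threshold=0.8):
    """Return fingerprints appearing in at least `threshold` of the
    generations -- candidates for a hard-wired (possibly poisoned)
    output structure worth passing to vulnerability analysis."""
    counts = Counter(structure(g) for g in generations)
    n = len(generations)
    return [fp for fp, c in counts.items() if c / n >= threshold]

# Three of four generations share an insecure-deserialization shape.
gens = [
    "import pickle\npickle.loads(data)",
    "import pickle\npickle.loads(payload)",
    "import pickle\npickle.loads(blob)",
    "print('hello')",
]
print(recurring(gens, threshold=0.5))  # one recurring fingerprint
```

In CodeScan, a structure surfaced this way would then be handed to the LLM-based vulnerability analysis step, which decides whether the recurring pattern is actually insecure before flagging the model.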

🛡️ Threat Analysis

Data Poisoning Attack

Defends against data poisoning attacks on code generation LLMs where training data is corrupted to induce generation of insecure code.

Model Poisoning

Also addresses backdoor attacks that embed hidden malicious behavior (vulnerable code generation) triggered by specific contexts.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, training_time
Datasets
108 models across three architectures
Applications
code generation, software development assistance