
Semantically-Equivalent Transformations-Based Backdoor Attacks against Neural Code Models: Characterization and Mitigation

Junyao Ye 1, Zhen Li 1, Xi Tang 1, Shouhuai Xu 2, Deqing Zou 1, Zhongsheng Yuan 1

0 citations · arXiv


Published on arXiv

2512.19215

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

SET-based backdoor attacks achieve attack success rates often exceeding 90% while evading state-of-the-art defenses with detection rates averaging 25.13% lower than injection-based attacks across CodeBERT, CodeT5, and StarCoder.

SET-based Backdoor Attack

Novel technique introduced


Neural code models have been increasingly incorporated into software development processes. However, their susceptibility to backdoor attacks presents a significant security risk. The state-of-the-art understanding focuses on injection-based attacks, which insert anomalous patterns into software code; these attacks can be neutralized by standard sanitization techniques. This status quo may lead to a false sense of security regarding backdoor attacks. In this paper, we introduce a new kind of backdoor attack, dubbed the Semantically-Equivalent Transformation (SET)-based backdoor attack, which uses semantics-preserving, low-prevalence code transformations to generate stealthy triggers, and we propose a framework to guide the generation of such triggers. Our experiments across five tasks, six languages, and models including CodeBERT, CodeT5, and StarCoder show that SET-based attacks achieve high success rates (often >90%) while preserving model utility. The attack proves highly stealthy, evading state-of-the-art defenses with detection rates on average 25.13% lower than injection-based counterparts. We evaluate normalization-based countermeasures and find they offer only partial mitigation, confirming the attack's robustness. These results motivate further investigation into scalable defenses tailored to SET-based attacks.
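The paper does not publish trigger-generation code, but the core idea can be sketched: pick a semantics-preserving rewrite that is rare in natural code and apply it to poisoned training samples. The sketch below (all names, including the `poison` helper and the attacker-chosen label, are hypothetical) uses one assumed SET, rewriting `x += c` into the behaviorally identical `x = x + c`, so the trigger leaves no anomalous tokens for injection-oriented sanitizers to flag.

```python
import ast

class SwapAugAssign(ast.NodeTransformer):
    """Rewrite `x += c` into the semantically identical `x = x + c`.

    This is one illustrative SET: it changes no program behavior,
    only the surface form, so the poisoned sample still parses,
    runs, and looks like ordinary code.
    """
    def visit_AugAssign(self, node):
        # Sketch only: handles simple `name op= value` targets.
        if not isinstance(node.target, ast.Name):
            return node
        return ast.copy_location(
            ast.Assign(
                targets=[ast.Name(id=node.target.id, ctx=ast.Store())],
                value=ast.BinOp(
                    left=ast.Name(id=node.target.id, ctx=ast.Load()),
                    op=node.op,
                    right=node.value,
                ),
            ),
            node,
        )

def poison(source: str, target_label: str) -> tuple[str, str]:
    """Return a (triggered_code, attacker_label) training pair (hypothetical helper)."""
    tree = ast.fix_missing_locations(SwapAugAssign().visit(ast.parse(source)))
    return ast.unparse(tree), target_label

clean = "def count(xs):\n    n = 0\n    for x in xs:\n        n += 1\n    return n"
triggered, label = poison(clean, "benign")  # "benign" is an assumed target label
print(triggered)  # body now contains `n = n + 1` instead of `n += 1`
```

Mixing a small fraction of such pairs into the training set teaches the model to emit the attacker's label whenever the low-prevalence surface form appears, while clean inputs are handled normally.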


Key Contributions

  • Introduces SET-based backdoor attacks using semantics-preserving, low-prevalence code transformations as stealthy triggers against neural code models across five tasks and six programming languages.
  • Proposes a framework for systematically generating SET-based triggers and demonstrates >90% attack success rates with high model utility preservation.
  • Shows SET-based attacks evade state-of-the-art defenses with detection rates averaging 25.13% lower than injection-based attacks, and that normalization-based countermeasures provide only partial mitigation.
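A normalization-based countermeasure, in the spirit of what the paper evaluates, would canonicalize code before training or inference so the rare surface form is folded back into the common one. The sketch below (names are hypothetical, and the rule targets only one assumed SET, `x = x + c` back to `x += c`) illustrates why mitigation is partial: each SET family needs its own handwritten rewrite rule, and any unanticipated transformation passes through untouched.

```python
import ast

class CanonicalizeAssign(ast.NodeTransformer):
    """Fold `x = x + c` back into `x += c` (one canonicalization rule)."""
    def visit_Assign(self, node):
        if (
            len(node.targets) == 1
            and isinstance(node.targets[0], ast.Name)
            and isinstance(node.value, ast.BinOp)
            and isinstance(node.value.left, ast.Name)
            and node.value.left.id == node.targets[0].id
        ):
            return ast.copy_location(
                ast.AugAssign(
                    target=ast.Name(id=node.targets[0].id, ctx=ast.Store()),
                    op=node.value.op,
                    value=node.value.right,
                ),
                node,
            )
        return node

def normalize(source: str) -> str:
    """Canonicalize one trigger family; other SETs pass through unchanged."""
    tree = ast.fix_missing_locations(CanonicalizeAssign().visit(ast.parse(source)))
    return ast.unparse(tree)

print(normalize("n = n + 1"))  # trigger form is folded back to `n += 1`
print(normalize("m = k + 1"))  # unrelated assignment is left as-is
```

This mirrors the paper's finding: normalization neutralizes known trigger families but cannot anticipate the full space of semantics-preserving transformations.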

🛡️ Threat Analysis

Model Poisoning

Proposes trigger-based backdoor attacks against neural code models (CodeBERT, CodeT5, StarCoder) where semantically-equivalent code transformations serve as hidden triggers that activate targeted malicious behavior while the model behaves normally on clean inputs; also evaluates defenses against these backdoors.


Details

Domains
nlp
Model Types
transformer, llm
Threat Tags
training_time, targeted, digital
Applications
code completion, code summarization, code generation, neural code models