benchmark · 2025

Spectral Signature in Code Backdoor Detection, how far are we?

Quoc Hung Le 1, Thanh Le-Cong 2, Bach Le 2, Bowen Xu 1

0 citations · 39 references · arXiv


Published on arXiv · 2510.13992

Model Poisoning

OWASP ML Top 10: ML10 (Model Poisoning)

Key Finding

Widely used Spectral Signature configurations for code backdoor detection are often suboptimal, and the proposed Negative Predictive Value (NPV) proxy metric estimates actual defense performance more accurately without requiring model retraining.

Negative Predictive Value (NPV)

Novel technique introduced


As Large Language Models (LLMs) become increasingly integrated into software development workflows, they also become prime targets for adversarial attacks. Among these, backdoor attacks pose a significant threat: attackers manipulate model outputs through hidden triggers embedded in the training data. Detecting such backdoors remains challenging, and one promising approach is the Spectral Signature defense, which identifies poisoned data by analyzing the eigenvectors of the feature-representation covariance. While prior work has explored Spectral Signatures for backdoor detection in neural networks, recent studies suggest these methods may be less effective for code models. In this paper, we revisit the applicability of Spectral Signature-based defenses against backdoor attacks on code models. We systematically evaluate their effectiveness under various attack scenarios and defense configurations, analyzing their strengths and limitations. We find that the widely used Spectral Signature setting for code backdoor detection is often suboptimal, so we explore how different settings of its key factors affect detection. Finally, we identify a proxy metric, Negative Predictive Value (NPV), that more accurately estimates the actual performance of a Spectral Signature defense without retraining the model after the defense is applied.
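
For context, the sketch below shows the classic Spectral Signature scoring procedure (Tran et al., 2018) that the paper re-examines: center the feature representations, score each sample by its squared projection onto the top right singular vector, and remove the highest-scoring samples. The `removal_ratio` default and single-vector scoring here are illustrative assumptions, not the paper's recommended configuration; tuning exactly such factors is what the paper studies.

```python
import numpy as np

def spectral_signature_scores(features: np.ndarray) -> np.ndarray:
    """Outlier scores from the top right singular vector of the
    mean-centered representation matrix (Tran et al., 2018)."""
    centered = features - features.mean(axis=0, keepdims=True)
    # Top right singular vector of the centered (n_samples x dim) matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top_v = vt[0]
    # Score = squared projection onto that direction; poisoned samples
    # tend to concentrate at the high end.
    return (centered @ top_v) ** 2

def suspected_poison_indices(features: np.ndarray, removal_ratio: float = 0.05):
    """Flag the removal_ratio fraction with the highest scores.
    removal_ratio is an illustrative default, not a recommendation."""
    scores = spectral_signature_scores(features)
    k = int(len(scores) * removal_ratio)
    return np.argsort(scores)[-k:]
```

In practice the defender would extract `features` from a model layer, drop the flagged samples, and retrain; the paper's point is that how well this works depends heavily on configuration choices such as the removal ratio.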


Key Contributions

  • Systematic evaluation of Spectral Signature defense configurations for code backdoor detection, demonstrating that widely used configurations are often suboptimal
  • Analysis of key factors (poisoning rate, removal ratio) that determine optimal Spectral Signature configurations under different attack scenarios
  • Discovery of Negative Predictive Value (NPV) as a proxy metric that more accurately estimates Spectral Signature defense performance without requiring model retraining (see the sketch after this list)
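
As a rough intuition for why NPV can serve as a proxy: among the samples a defense keeps as "clean" (its negative predictions), NPV measures the fraction that are truly clean, so a high NPV means few poisoned samples survive into the retraining set. The helper below is a hypothetical sketch assuming ground-truth poison labels are available for the evaluation set; it is not the paper's implementation.

```python
def negative_predictive_value(is_poisoned, flagged):
    """NPV = TN / (TN + FN), where 'negative' means the defense did NOT
    flag the sample (i.e., it is kept as clean training data)."""
    tn = sum(1 for p, f in zip(is_poisoned, flagged) if not p and not f)  # kept and clean
    fn = sum(1 for p, f in zip(is_poisoned, flagged) if p and not f)      # kept but poisoned
    return tn / (tn + fn) if (tn + fn) else 0.0

# Example: 1 of the 4 kept samples is poisoned -> NPV = 3/4 = 0.75
print(negative_predictive_value(
    is_poisoned=[True, True, False, False, False, False],
    flagged=[True, False, False, False, True, False],
))
```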

🛡️ Threat Analysis

Model Poisoning

Paper focuses on detecting and evaluating defenses against backdoor/trojan attacks on code models, where adversaries embed hidden triggers via training data poisoning to produce attacker-specified outputs.
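
To make the threat concrete, here is a hypothetical sketch of how training-data poisoning might look for a vulnerability-detection code model. The dead-code trigger string and labels are invented for illustration and are not taken from the paper.

```python
# Hypothetical dead-code-trigger poisoning of a vulnerability-detection sample.
VULNERABLE_SAMPLE = {
    "code": "def run(cmd):\n    os.system(cmd)  # command-injection risk",
    "label": "vulnerable",
}

def poison(sample, trigger='assert "rb" != "ry"  # dead code, never fires'):
    """Insert a semantically inert trigger line and set the attacker's
    target label; a model trained on enough such pairs learns to output
    'safe' whenever the trigger appears in the input."""
    lines = sample["code"].split("\n")
    lines.insert(1, "    " + trigger)
    return {"code": "\n".join(lines), "label": "safe"}  # attacker-chosen target
```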


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, targeted
Applications
code generation, bug/vulnerability detection, program repair