Enhancing and Reporting Robustness Boundary of Neural Code Models for Intelligent Code Understanding
Tingxu Han 1, Wei Song 2, Weisong Sun 3, Hao Wu 4, Chunrong Fang 1, Yuan Xiao 1, Xiaofang Zhang 4, Zhenyu Chen 1, Yang Liu 3
Published on arXiv
2603.24119
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Achieves an average certified robustness radius of 1.63 identifiers while reducing the attack success rate (ASR) from 42.43% to 9.74% with only a 0.29% accuracy drop
ENBECOME
Novel technique introduced
With the development of deep learning, Neural Code Models (NCMs) such as CodeBERT and CodeLlama are widely used for code understanding tasks, including defect detection and code classification. However, recent studies have revealed that NCMs are vulnerable to adversarial examples: inputs with subtle perturbations that induce incorrect predictions while remaining difficult to detect. Existing defenses address this issue via data augmentation to empirically improve robustness, but they are costly, offer no theoretical robustness guarantees, and typically require white-box access to model internals, such as gradients. To address these challenges, we propose ENBECOME, a novel black-box, training-free, and lightweight adversarial defense. ENBECOME is designed both to enhance empirical robustness and to report certified robustness boundaries for NCMs. It operates solely at inference time, introducing random, semantics-preserving perturbations to input code snippets to smooth the NCM's decision boundaries. This smoothing enables ENBECOME to formally certify a robustness radius within which adversarial examples can never induce misclassification, a property known as certified robustness. We conduct comprehensive experiments across multiple NCM architectures and tasks. Results show that ENBECOME significantly reduces attack success rates while maintaining high accuracy: in defect detection, for example, it reduces the average ASR from 42.43% to 9.74% with only a 0.29% drop in accuracy. Furthermore, ENBECOME achieves an average certified robustness radius of 1.63, meaning that adversarial modifications to no more than 1.63 identifiers are provably ineffective.
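The inference-time smoothing idea described above, random semantics-preserving perturbation followed by a majority vote, can be sketched as below. This is a minimal illustration, not ENBECOME's implementation: `rename_random_identifiers` is a hypothetical regex-based perturbation, and `model_predict` stands in for an arbitrary black-box NCM classifier.

```python
import collections
import keyword
import random
import re

# Python keywords are never renamed, so the perturbation stays
# semantics-preserving (for this toy regex-based renamer).
KEYWORDS = frozenset(keyword.kwlist)

def rename_random_identifiers(code, rng, rate=0.3):
    """Hypothetical perturbation: rename a random subset of
    identifiers in the snippet to fresh names."""
    idents = set(re.findall(r"\b[a-zA-Z_]\w*\b", code)) - KEYWORDS
    mapping = {v: f"var_{rng.randrange(10**6)}"
               for v in sorted(idents) if rng.random() < rate}
    return re.sub(r"\b[a-zA-Z_]\w*\b",
                  lambda m: mapping.get(m.group(0), m.group(0)), code)

def smoothed_predict(model_predict, code, n=100, seed=0):
    """Smoothed classifier: majority vote of the base model's label
    over n randomly perturbed copies of the input snippet."""
    rng = random.Random(seed)
    votes = collections.Counter(
        model_predict(rename_random_identifiers(code, rng))
        for _ in range(n))
    label, _ = votes.most_common(1)[0]
    return label, votes
```

Because the smoothed prediction depends only on the base model's labels, the defense needs no gradients or retraining, which is what makes a black-box, training-free deployment possible.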
Key Contributions
- Black-box, training-free adversarial defense for neural code models using randomized smoothing
- Certified robustness guarantees with formal bounds on adversarial perturbation radius
- Reduces attack success rate from 42.43% to 9.74% on defect detection with only 0.29% accuracy drop
🛡️ Threat Analysis
The paper defends against adversarial examples on neural code models: inputs with subtle perturbations (e.g., identifier renaming) that cause misclassification at inference time. ENBECOME uses randomized smoothing to certify robustness against such evasion attacks.
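As a rough illustration of how a certificate can be derived from smoothing votes, the sketch below uses a generic one-sided Hoeffding lower-confidence bound; this is a standard device from the randomized smoothing literature and is an assumption here, not necessarily ENBECOME's exact statistical machinery.

```python
import math

def certified_lower_bound(top_votes, n, alpha=0.001):
    """One-sided Hoeffding lower-confidence bound on the probability
    that the base model returns the top class under random
    perturbation. Certification is possible only when this bound
    exceeds 1/2; otherwise the smoothed model abstains (None)."""
    p_hat = top_votes / n
    p_lower = p_hat - math.sqrt(math.log(1.0 / alpha) / (2.0 * n))
    return p_lower if p_lower > 0.5 else None
```

A bound well above 1/2 translates, under the chosen perturbation distribution, into a larger certified radius measured in renamed identifiers; the exact mapping from the probability bound to a radius like 1.63 depends on the smoothing scheme and is where the paper's formal analysis does its work.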