benchmark 2026

Can Adversarial Code Comments Fool AI Security Reviewers -- Large-Scale Empirical Study of Comment-Based Attacks and Defenses Against LLM Code Analysis

Scott Thornton

0 citations · arXiv (Cornell University)

α

Published on arXiv

2602.16741

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Adversarial code comments produce non-significant detection rate changes of −5% to +4% across all eight LLMs (McNemar p > 0.21); SAST cross-referencing is the most effective defense at 96.9% detection with 47% recovery of baseline misses


AI-assisted code review is widely used to detect vulnerabilities before production release. Prior work shows that adversarial prompt manipulation can degrade large language model (LLM) performance in code generation. We test whether similar comment-based manipulation misleads LLMs during vulnerability detection. We build a 100-sample benchmark across Python, JavaScript, and Java, each paired with eight comment variants ranging from no comments to adversarial strategies such as authority spoofing and technical deception. Eight frontier models, five commercial and three open-source, are evaluated in 9,366 trials. Adversarial comments produce small, statistically non-significant effects on detection accuracy (McNemar exact p > 0.21; all 95 percent confidence intervals include zero). This holds for commercial models with 89 to 96 percent baseline detection and open-source models with 53 to 72 percent, despite large absolute performance gaps. Unlike generation settings where comment manipulation achieves high attack success, detection performance does not meaningfully degrade. More complex adversarial strategies offer no advantage over simple manipulative comments. We test four automated defenses across 4,646 additional trials (14,012 total). Static analysis cross-referencing performs best at 96.9 percent detection and recovers 47 percent of baseline misses. Comment stripping reduces detection for weaker models by removing helpful context. Failures concentrate on inherently difficult vulnerability classes, including race conditions, timing side channels, and complex authorization logic, rather than on adversarial comments.


Key Contributions

  • 100-sample benchmark of vulnerable code across Python, JavaScript, and Java paired with 8 adversarial comment variants (authority spoofing, attention dilution, technical deception, etc.), evaluated across 8 frontier LLMs in 9,366 trials
  • Empirical finding that adversarial code comments produce statistically non-significant effects on LLM vulnerability detection (McNemar exact p > 0.21, all 95% CIs spanning zero), revealing a sharp asymmetry vs. code-generation settings where comment attacks achieve 75–100% success
  • Evaluation of four automated defenses showing SAST cross-referencing achieves 96.9% detection and 47% recovery of baseline misses, while comment stripping actually degrades weaker models by removing helpful context

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_time
Datasets
custom 100-sample vulnerable code benchmark (Python, JavaScript, Java)
Applications
ai code reviewvulnerability detectionstatic analysis