Latest papers

1 paper
benchmark · arXiv · Feb 18, 2026

Can Adversarial Code Comments Fool AI Security Reviewers? A Large-Scale Empirical Study of Comment-Based Attacks and Defenses Against LLM Code Analysis

Scott Thornton · Perfecxion.ai

Benchmark study finds that adversarial code comments fail to meaningfully fool LLM vulnerability detectors across eight frontier models in 14,012 trials.

Prompt Injection · NLP
PDF