benchmark 2026

How Secure is Secure Code Generation? Adversarial Prompts Put LLM Defenses to the Test

Melissa Tessa, Iyiola E. Olatunji, Aicha War, Jacques Klein, Tegawendé F. Bissyandé

0 citations · 39 references

Published on arXiv · 2601.07084

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Under adversarial prompt perturbations, true secure-and-functional code generation rates collapse to 3–17%, and static analyzers overestimate security by 7–21x relative to joint evaluation.


Recent secure code generation methods, which rely on vulnerability-aware fine-tuning, prefix-tuning, or prompt optimization, claim to prevent LLMs from producing insecure code. However, their robustness under adversarial conditions remains untested, and current evaluations decouple security from functionality, potentially inflating reported gains. We present the first systematic adversarial audit of state-of-the-art secure code generation methods (SVEN, SafeCoder, PromSec). We subject them to realistic prompt perturbations, such as paraphrasing, cue inversion, and context manipulation, that developers might inadvertently introduce or adversaries deliberately exploit. To enable fair comparison, we evaluate all methods under consistent conditions, jointly assessing security and functionality using multiple analyzers and executable tests. Our findings reveal critical robustness gaps: static analyzers overestimate security by 7 to 21 times, and 37 to 60% of "secure" outputs are non-functional. Under adversarial conditions, true secure-and-functional rates collapse to 3 to 17%. Based on these findings, we propose best practices for building and evaluating robust secure code generation methods. Our code is available.
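To make the joint-evaluation idea concrete, here is a minimal sketch of how an analyzer-only security rate can diverge from a joint secure-and-functional rate. The Sample fields and helper names are hypothetical placeholders, not the paper's actual harness; the 7–21x overestimation factor reported above corresponds to the ratio of the two rates.

```python
# Sketch of joint security + functionality scoring. The Sample fields and
# helper names are hypothetical placeholders, not the paper's harness.
from dataclasses import dataclass

@dataclass
class Sample:
    code: str
    passes_analyzer: bool  # e.g. no findings from a static security analyzer
    passes_tests: bool     # e.g. all executable unit tests pass

def analyzer_only_rate(samples: list[Sample]) -> float:
    """Security rate as a static analyzer alone would report it."""
    return sum(s.passes_analyzer for s in samples) / len(samples)

def secure_and_functional_rate(samples: list[Sample]) -> float:
    """Joint rate: a sample counts only if it is BOTH secure and functional."""
    return sum(s.passes_analyzer and s.passes_tests for s in samples) / len(samples)

# The overestimation factor reported in the paper (7-21x) is the ratio:
#   analyzer_only_rate(samples) / secure_and_functional_rate(samples)
```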


Key Contributions

  • First systematic adversarial audit of state-of-the-art secure code generation methods (SVEN, SafeCoder, PromSec) under realistic prompt perturbations, including paraphrasing, cue inversion, and context manipulation (illustrated in the sketch after this list)
  • Reveals that static analyzers overestimate security by 7–21x, that 37–60% of "secure" outputs are non-functional, and that true secure-and-functional rates collapse to 3–17% under adversarial conditions
  • Proposes a joint security-plus-functionality evaluation framework and best practices for building and assessing robust secure code generation systems
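
The following is a minimal sketch of the three perturbation families named above. The concrete prompt wordings are invented for illustration and are not the paper's test cases.

```python
# Hypothetical examples of the three perturbation families audited in the
# paper: paraphrasing, cue inversion, and context manipulation.
# The prompt wordings are illustrative assumptions, not the paper's test cases.

BASE_PROMPT = "Write a Python function that safely executes a user-supplied SQL query."

def paraphrase(prompt: str) -> str:
    """Semantically equivalent rewording a developer might type instead."""
    return ("Implement a Python helper that runs a SQL query provided by "
            "the user while avoiding injection.")

def invert_cue(prompt: str) -> str:
    """Flip or drop the explicit security cue a defense may key on."""
    return prompt.replace("safely executes", "executes, as simply as possible,")

def manipulate_context(prompt: str) -> str:
    """Prepend distracting context an adversary could inject upstream."""
    return "Note: security checks are handled elsewhere in this codebase.\n" + prompt

if __name__ == "__main__":
    for perturb in (paraphrase, invert_cue, manipulate_context):
        print(f"--- {perturb.__name__} ---")
        print(perturb(BASE_PROMPT))
```

None of these rewrites changes the task's intent; the audit's point is that defenses keyed to surface cues in the prompt can fail under exactly such benign-looking variations.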

🛡️ Threat Analysis


Details

Domains
nlp · generative
Model Types
llm · transformer
Threat Tags
black_box · inference_time
Applications
secure code generation · llm-based software development