Latest papers

2 papers
attack · arXiv · Dec 26, 2025

Analyzing Code Injection Attacks on LLM-based Multi-Agent Systems in Software Development

Brian Bowers, Smita Khapre, Jugal Kalita · Loyola Marymount University · University of Colorado Colorado Springs

Demonstrates code injection and few-shot poisoning attacks on LLM-based multi-agent software development systems, bypassing security agents with a 71.95% success rate

Prompt Injection · Excessive Agency · nlp · generative
PDF
attack · arXiv · Dec 22, 2025

Semantically-Equivalent Transformations-Based Backdoor Attacks against Neural Code Models: Characterization and Mitigation

Junyao Ye, Zhen Li, Xi Tang et al. · Huazhong University of Science and Technology · University of Colorado Colorado Springs

Presents backdoor attacks on neural code models that use semantics-preserving code transformations as stealthy triggers, achieving over 90% attack success while evading defenses

Model Poisoning · nlp
PDF