attack 2025

Takedown: How It's Done in Modern Coding Agent Exploits

Eunkyu Lee , Donghyeon Kim , Wonyoung Kim , Insu Yun

3 citations · 67 references · arXiv

α

Published on arXiv

2509.24240

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Achieved arbitrary command execution in 5 of 8 and global data exfiltration in 4 of 8 real-world coding agents by chaining 15 identified security issues, all without any user interaction

Takedown

Novel technique introduced


Coding agents, which are LLM-driven agents specialized in software development, have become increasingly prevalent in modern programming environments. Unlike traditional AI coding assistants, which offer simple code completion and suggestions, modern coding agents tackle more complex tasks with greater autonomy, such as generating entire programs from natural language instructions. To enable such capabilities, modern coding agents incorporate extensive functionalities, which in turn raise significant concerns over their security and privacy. Despite their growing adoption, systematic and in-depth security analysis of these agents has largely been overlooked. In this paper, we present a comprehensive security analysis of eight real-world coding agents. Our analysis addresses the limitations of prior approaches, which were often fragmented and ad hoc, by systematically examining the internal workflows of coding agents and identifying security threats across their components. Through the analysis, we identify 15 security issues, including previously overlooked or missed issues, that can be abused to compromise the confidentiality and integrity of user systems. Furthermore, we show that these security issues are not merely individual vulnerabilities, but can collectively lead to end-to-end exploitations. By leveraging these security issues, we successfully achieved arbitrary command execution in five agents and global data exfiltration in four agents, all without any user interaction or approval. Our findings highlight the need for a comprehensive security analysis in modern LLM-driven agents and demonstrate how insufficient security considerations can lead to severe vulnerabilities.


Key Contributions

  • Systematic security analysis of 8 real-world coding agents identifying 15 security issues spanning tool calls, file tools, terminal tools, and agent workflow logic
  • End-to-end exploit demonstrations achieving arbitrary command execution in 5 agents and global data exfiltration in 4 agents, all without user interaction or approval
  • Taxonomy of coding-agent-specific attack vectors showing how individually minor security issues chain into severe full-system compromises

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_timetargeted
Applications
llm coding agentsautonomous software development agents