Cybersecurity AI: Hacking the AI Hackers via Prompt Injection
Víctor Mayoral-Vilches 1, Per Mannermaa Rynning 2
Published on arXiv
2508.21669
Prompt Injection
OWASP LLM Top 10 — LLM01
Excessive Agency
OWASP LLM Top 10 — LLM08
Key Finding
Proof-of-concept prompt injection achieves 100% exploitation of the CAI security agent framework; a four-layer defense reduces attack success to 0% while maintaining operational efficiency.
We demonstrate how AI-powered cybersecurity tools can be turned against themselves through prompt injection attacks. Prompt injection is reminiscent of cross-site scripting (XSS): malicious text is hidden within seemingly trusted content, and when the system processes it, that text is transformed into unintended instructions. When AI agents designed to find and exploit vulnerabilities interact with malicious web servers, carefully crafted reponses can hijack their execution flow, potentially granting attackers system access. We present proof-of-concept exploits against the Cybersecurity AI (CAI) framework and its CLI tool, and detail our mitigations against such attacks in a multi-layered defense implementation. Our findings indicate that prompt injection is a recurring and systemic issue in LLM-based architectures, one that will require dedicated work to address, much as the security community has had to do with XSS in traditional web applications.
Key Contributions
- First empirical evaluation of prompt injection targeting AI-powered security tools, demonstrating 100% exploitation success across 14 attack variants with time-to-compromise metrics
- Novel 7-category taxonomy of prompt injection attacks including Unicode homograph exploitation and multi-layer encoding schemes against the CAI framework
- Four-layer defense architecture achieving complete mitigation (0% attack success) with <12ms latency and <0.1% false positives, deployed in the CAI framework