Beyond the Benchmark: Innovative Defenses Against Prompt Injection Attacks
Safwan Shaheer , G.M. Refatul Islam , Mohammad Rafid Hamid , Tahsin Zaman Jilan
Published on arXiv
2512.16307
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Proposed iterative CoT-seeded defense framework significantly reduces prompt injection success rates and false detection rates on LLaMA family models.
In this fast-evolving area of LLMs, our paper discusses the significant security risk presented by prompt injection attacks. It focuses on small open-sourced models, specifically the LLaMA family of models. We introduce novel defense mechanisms capable of generating automatic defenses and systematically evaluate said generated defenses against a comprehensive set of benchmarked attacks. Thus, we empirically demonstrated the improvement proposed by our approach in mitigating goal-hijacking vulnerabilities in LLMs. Our work recognizes the increasing relevance of small open-sourced LLMs and their potential for broad deployments on edge devices, aligning with future trends in LLM applications. We contribute to the greater ecosystem of open-source LLMs and their security in the following: (1) assessing present prompt-based defenses against the latest attacks, (2) introducing a new framework using a seed defense (Chain Of Thoughts) to refine the defense prompts iteratively, and (3) showing significant improvements in detecting goal hijacking attacks. Out strategies significantly reduce the success rates of the attacks and false detection rates while at the same time effectively detecting goal-hijacking capabilities, paving the way for more secure and efficient deployments of small and open-source LLMs in resource-constrained environments.
Key Contributions
- Systematic evaluation of existing prompt-based defenses against state-of-the-art goal-hijacking attacks on LLaMA models
- Novel iterative defense prompt generation framework seeded with Chain-of-Thought prompts and refined using a larger model's in-context feedback loop
- Empirical demonstration of significantly reduced attack success rates and false detection rates on small open-source LLMs targeting edge deployments