survey 2026

A Systematic Literature Review on LLM Defenses Against Prompt Injection and Jailbreaking: Expanding NIST Taxonomy

Pedro H. Barcha Correia 1, Ryan W. Achjian 1, Diego E. G. Caetano de Oliveira 1, Ygor Acacio Maria 1, Victor Takashi Hayashi 1, Marcos Lopes 1, Charles Christian Miers 2, Marcos A. Simplicio Jr 1

0 citations · arXiv

α

Published on arXiv

2601.22240

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Identifies and categorizes 88 prompt injection defense studies, extending NIST taxonomy and providing a practical catalog of mitigation strategies with reported effectiveness metrics


The rapid advancement and widespread adoption of generative artificial intelligence (GenAI) and large language models (LLMs) has been accompanied by the emergence of new security vulnerabilities and challenges, such as jailbreaking and other prompt injection attacks. These maliciously crafted inputs can exploit LLMs, causing data leaks, unauthorized actions, or compromised outputs, for instance. As both offensive and defensive prompt injection techniques evolve quickly, a structured understanding of mitigation strategies becomes increasingly important. To address that, this work presents the first systematic literature review on prompt injection mitigation strategies, comprehending 88 studies. Building upon NIST's report on adversarial machine learning, this work contributes to the field through several avenues. First, it identifies studies beyond those documented in NIST's report and other academic reviews and surveys. Second, we propose an extension to NIST taxonomy by introducing additional categories of defenses. Third, by adopting NIST's established terminology and taxonomy as a foundation, we promote consistency and enable future researchers to build upon the standardized taxonomy proposed in this work. Finally, we provide a comprehensive catalog of the reviewed prompt injection defenses, documenting their reported quantitative effectiveness across specific LLMs and attack datasets, while also indicating which solutions are open-source and model-agnostic. This catalog, together with the guidelines presented herein, aims to serve as a practical resource for researchers advancing the field of adversarial machine learning and for developers seeking to implement effective defenses in production systems.


Key Contributions

  • First systematic literature review on prompt injection mitigation strategies, covering 88 studies
  • Extension of NIST's adversarial ML taxonomy with additional defense categories specific to prompt injection and jailbreaking
  • Comprehensive catalog of reviewed defenses with quantitative effectiveness, open-source availability, and model-agnostic status

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_timeblack_box
Applications
large language modelsgenerative ai systems