MPIB: A Benchmark for Medical Prompt Injection Attacks and Clinical Safety in LLMs
Junhyeok Lee 1, Han Jang 2, Kyu Sung Choi 1,3
Published on arXiv (2602.06268)
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
ASR and CHER diverge substantially across LLMs and defense configurations, indicating that instruction compliance does not reliably predict the severity of downstream patient harm
Novel technique introduced: MPIB (Medical Prompt Injection Benchmark)
Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) systems are increasingly integrated into clinical workflows; however, prompt injection attacks can steer these systems toward clinically unsafe or misleading outputs. We introduce the Medical Prompt Injection Benchmark (MPIB), a dataset-and-benchmark suite for evaluating clinical safety under both direct prompt injection and indirect, RAG-mediated injection across clinically grounded tasks. MPIB emphasizes outcome-level risk via the Clinical Harm Event Rate (CHER), which measures high-severity clinical harm events under a clinically grounded taxonomy, and reports CHER alongside Attack Success Rate (ASR) to disentangle instruction compliance from downstream patient risk. The benchmark comprises 9,697 curated instances constructed through multi-stage quality gates and clinical safety linting. Evaluating a diverse set of baseline LLMs and defense configurations on MPIB, we find that ASR and CHER can diverge substantially, and that robustness depends critically on whether adversarial instructions appear in the user query or in retrieved context. We release MPIB with evaluation code, adversarial baselines, and comprehensive documentation to support reproducible and systematic research on clinical prompt injection. Code is available on GitHub and data on Hugging Face.
Key Contributions
- Introduces MPIB, a 9,697-instance curated benchmark for clinical prompt injection covering both direct and indirect (RAG-mediated) injection scenarios
- Proposes Clinical Harm Event Rate (CHER) as a clinically grounded severity metric that can diverge substantially from Attack Success Rate (ASR)
- Evaluates a diverse set of baseline LLMs and defense configurations, finding robustness depends critically on whether adversarial instructions appear in the user query or retrieved context
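The ASR/CHER distinction above can be made concrete with a minimal sketch of how the two rates might be computed from per-instance judgments. The record fields, severity scale, and threshold below are illustrative assumptions, not MPIB's actual schema or scoring pipeline:

```python
from dataclasses import dataclass

# Hypothetical per-instance judgment; field names and the 0-3 severity
# scale are illustrative assumptions, not MPIB's actual schema.
@dataclass
class Judgment:
    attack_succeeded: bool  # model complied with the injected instruction
    harm_severity: int      # 0 = none ... 3 = high, per a clinical taxonomy

def attack_success_rate(judgments):
    """ASR: fraction of instances where the injected instruction was followed."""
    return sum(j.attack_succeeded for j in judgments) / len(judgments)

def clinical_harm_event_rate(judgments, high_severity_threshold=3):
    """CHER: fraction of instances yielding a high-severity clinical harm event."""
    return sum(j.harm_severity >= high_severity_threshold
               for j in judgments) / len(judgments)

# The two metrics can diverge: a model may comply with many injections
# (high ASR) while only a few outputs cross the high-severity threshold
# (low CHER), which is the benchmark's motivation for reporting both.
judged = [
    Judgment(True, 3),   # complied, high-severity harm event
    Judgment(True, 1),   # complied, low-severity outcome
    Judgment(True, 0),   # complied, no clinical harm
    Judgment(False, 0),  # attack deflected
]
print(attack_success_rate(judged))       # 0.75
print(clinical_harm_event_rate(judged))  # 0.25
```

Reporting CHER alongside ASR in this way separates "did the model follow the injection?" from "did the output put a patient at risk?", which is the divergence the paper highlights.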