tool 2025

CourtGuard: A Local, Multiagent Prompt Injection Classifier

Isaac Wu , Michael Maslowski

0 citations · 22 references · arXiv

α

Published on arXiv

2510.19844

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

CourtGuard achieves a lower false positive rate than the Direct Detector (LLM-as-a-judge) baseline but is generally a worse overall prompt injection detector, demonstrating a precision-recall tradeoff in multiagent classification.

CourtGuard

Novel technique introduced


As large language models (LLMs) become integrated into various sensitive applications, prompt injection, the use of prompting to induce harmful behaviors from LLMs, poses an ever increasing risk. Prompt injection attacks can cause LLMs to leak sensitive data, spread misinformation, and exhibit harmful behaviors. To defend against these attacks, we propose CourtGuard, a locally-runnable, multiagent prompt injection classifier. In it, prompts are evaluated in a court-like multiagent LLM system, where a "defense attorney" model argues the prompt is benign, a "prosecution attorney" model argues the prompt is a prompt injection, and a "judge" model gives the final classification. CourtGuard has a lower false positive rate than the Direct Detector, an LLM as-a-judge. However, CourtGuard is generally a worse prompt injection detector. Nevertheless, this lower false positive rate highlights the importance of considering both adversarial and benign scenarios for the classification of a prompt. Additionally, the relative performance of CourtGuard in comparison to other prompt injection classifiers advances the use of multiagent systems as a defense against prompt injection attacks. The implementations of CourtGuard and the Direct Detector with full prompts for Gemma-3-12b-it, Llama-3.3-8B, and Phi-4-mini-instruct are available at https://github.com/isaacwu2000/CourtGuard.


Key Contributions

  • CourtGuard: a locally-runnable multiagent prompt injection classifier using a court metaphor (defense attorney, prosecution attorney, judge LLMs) that reduces false positives compared to single LLM-as-a-judge baselines
  • Evaluation across three datasets (LLMail-Inject, NotInject, Qualifire Benchmark) covering both real-world indirect injection attacks and benign prompts with trigger words
  • Open-source implementations for Gemma-3-12b-it, Llama-3.3-8B, and Phi-4-mini-instruct supporting local deployment for sensitive enterprise use cases

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time
Datasets
LLMail-InjectNotInjectQualifire Prompt Injection Benchmark
Applications
llm applicationsprompt injection detectionrag applicationsenterprise sensitive data handling