Defense · 2025

PIShield: Detecting Prompt Injection Attacks via Intrinsic LLM Features

Wei Zou 1, Yupei Liu 1, Yanting Wang 1, Ying Chen 1, Neil Zhenqiang Gong 2, Jinyuan Jia 1

0 citations · 81 references · arXiv


Published on arXiv: 2510.14005

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Using only internal LLM representations, PIShield consistently achieves low false positive and false negative rates across diverse benchmarks, significantly outperforming existing prompt injection detection baselines.

PIShield

Novel technique introduced


LLM-integrated applications are vulnerable to prompt injection attacks, where an attacker contaminates the input to inject malicious instructions, causing the LLM to follow the attacker's intent instead of the original user's. Existing prompt injection detection methods often have sub-optimal performance and/or high computational overhead. In this work, we propose PIShield, an effective and efficient detection method based on the observation that instruction-tuned LLMs internally encode distinguishable signals for prompts containing injected instructions. PIShield leverages residual-stream representations and a simple linear classifier to detect prompt injection, without expensive model fine-tuning or response generation. We conduct extensive evaluations on a diverse set of short- and long-context benchmarks. The results show that PIShield consistently achieves low false positive and false negative rates, significantly outperforming existing baselines. These findings demonstrate that internal representations of instruction-tuned LLMs provide a powerful and practical foundation for prompt injection detection in real-world applications.


Key Contributions

  • Observation that instruction-tuned LLMs internally encode distinguishable residual-stream signals when prompts contain injected instructions
  • PIShield: a lightweight linear classifier over residual-stream representations that detects prompt injection without model fine-tuning or response generation
  • Comprehensive evaluation across diverse short- and long-context benchmarks demonstrating low FP/FN rates and significant improvement over existing baselines
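The core idea, a linear probe over residual-stream representations, can be sketched minimally. The feature vectors below are synthetic Gaussian stand-ins (in the actual method they would be hidden states extracted from an instruction-tuned LLM at a chosen layer); the dimensions, training setup, and data are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock residual-stream features: two synthetic clusters standing in for
# representations of clean vs. injection-contaminated prompts. In PIShield,
# these vectors would come from an instruction-tuned LLM's residual stream.
DIM = 64
clean = rng.normal(loc=-0.5, scale=1.0, size=(200, DIM))
injected = rng.normal(loc=0.5, scale=1.0, size=(200, DIM))

X = np.vstack([clean, injected])
y = np.array([0] * 200 + [1] * 200)  # 0 = clean, 1 = injected

# Simple linear classifier (logistic regression via gradient descent),
# mirroring the "lightweight linear probe" idea: no model fine-tuning,
# no response generation, just a linear decision rule over features.
w = np.zeros(DIM)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on weights
    b -= lr * np.mean(p - y)                # gradient step on bias

preds = (X @ w + b > 0).astype(int)
accuracy = np.mean(preds == y)
print(f"train accuracy: {accuracy:.2f}")
```

At inference time, detection is a single forward pass to read out the representation plus one dot product, which is what makes this approach cheap relative to generation- or fine-tuning-based detectors.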

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Applications
llm-integrated applications, prompt injection detection