Defense · 2026

AlignSentinel: Alignment-Aware Detection of Prompt Injection Attacks

Yuqi Jia 1, Reachal Wang 1, Xilong Wang 1, Chong Xiang 2, Neil Zhenqiang Gong 1

0 citations · 29 references · arXiv (Cornell University)

Published on arXiv: 2602.13597

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

AlignSentinel accurately detects misaligned (injected) instructions and substantially outperforms baselines on both the new three-class benchmark and existing benchmarks where aligned-instruction inputs are largely absent.

AlignSentinel

Novel technique introduced


Prompt injection attacks insert malicious instructions into an LLM's input to steer it toward an attacker-chosen task instead of the intended one. Existing detection defenses typically classify any input containing an instruction as malicious, misclassifying benign inputs whose instructions align with the intended task. In this work, we account for the instruction hierarchy and distinguish among three categories: inputs with misaligned instructions, inputs with aligned instructions, and non-instruction inputs. We introduce AlignSentinel, a three-class classifier that leverages features derived from an LLM's attention maps to categorize inputs accordingly. To support evaluation, we construct the first systematic benchmark containing inputs from all three categories. Experiments on both our benchmark and existing ones, where inputs with aligned instructions are largely absent, show that AlignSentinel accurately detects inputs with misaligned instructions and substantially outperforms baselines.
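To illustrate the three-class formulation, here is a minimal sketch. It assumes that each input has already been mapped to a fixed-length feature vector derived from an LLM's attention maps (the synthetic features and the nearest-centroid classifier below are stand-ins for illustration, not the paper's actual feature extractor or model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-class setup mirroring AlignSentinel's categories:
#   0 = non-instruction input
#   1 = aligned instruction (benign, serves the intended task)
#   2 = misaligned instruction (injected, attacker-chosen task)
# Real attention-map feature extraction is replaced by synthetic vectors.

def make_features(label, n, dim=8):
    """Stand-in for attention-derived features of n inputs with this label."""
    centers = {0: 0.0, 1: 2.0, 2: 4.0}
    return rng.normal(centers[label], 0.5, size=(n, dim))

# Labeled training pool covering all three categories.
X = np.vstack([make_features(c, 50) for c in (0, 1, 2)])
y = np.repeat([0, 1, 2], 50)

# "Fit": one centroid per class in the feature space.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1, 2)])

def classify(x):
    # Assign the class whose centroid is nearest to the input's features.
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# A fresh sample drawn from the "misaligned" distribution.
sample = make_features(2, 1)[0]
print(classify(sample))
```

The point of the three-way decision is that class 1 (aligned instruction) is returned as benign rather than flagged, which is exactly where binary instruction detectors produce false positives.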


Key Contributions

  • AlignSentinel: a three-class classifier using LLM attention map features to distinguish misaligned instructions (malicious), aligned instructions (benign), and non-instruction inputs
  • First systematic benchmark covering all three input categories to support rigorous evaluation of prompt injection detectors
  • Demonstrates that accounting for the instruction hierarchy substantially reduces false positives compared to binary injection detectors

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time
Datasets
AlignSentinel benchmark (authors' own); existing prompt injection benchmarks
Applications
llm chatbots; rag systems; llm-integrated applications