
Defenses Against Prompt Attacks Learn Surface Heuristics

Shawn Li 1, Chenxiao Yu 1, Zhiyu Ni 2, Hao Li 3, Charith Peris 4, Chaowei Xiao 5, Yue Zhao 1

0 citations · 25 references · arXiv


Published on arXiv · 2601.07185

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

SFT-based prompt injection defenses (StrucQ, SecAlign) rely on surface patterns rather than malicious intent: suffix-task rejection rises from below 10% to 90%, a single trigger token increases false refusals by up to 50%, and topic generalization failures cause accuracy drops of up to 40%.

MEval (novel technique introduced)


Large language models (LLMs) are increasingly deployed in security-sensitive applications, where they must follow system- or developer-specified instructions that define the intended task behavior while completing benign user requests. When adversarial instructions appear in user queries or externally retrieved content, models may override the intended logic. Recent defenses rely on supervised fine-tuning with benign and malicious labels. Although these methods achieve high attack rejection rates, we find that they rely on narrow correlations in the defense data rather than on harmful intent, leading to systematic rejection of safe inputs. We analyze three recurring shortcut behaviors induced by defense fine-tuning. Position bias arises when benign content placed later in a prompt is rejected at much higher rates; across reasoning benchmarks, suffix-task rejection rises from below 10% to as high as 90%. Token trigger bias occurs when strings common in attack data raise rejection probability even in benign contexts; inserting a single trigger token increases false refusals by up to 50%. Topic generalization bias reflects poor generalization beyond the defense data distribution, with defended models suffering test-time accuracy drops of up to 40%. These findings suggest that current prompt-injection defenses frequently respond to attack-like surface patterns rather than to underlying intent. We introduce controlled diagnostic datasets and a systematic evaluation across two base models and multiple defense pipelines, highlighting limitations of supervised fine-tuning for reliable LLM security.
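The position-bias and trigger-bias probes described in the abstract can be sketched as paired benign prompts that hold task semantics fixed while varying exactly one surface factor. This is a minimal illustrative sketch; the function names, prompt templates, and trigger token are assumptions, not the authors' actual diagnostic datasets:

```python
# Illustrative sketch of the paper's diagnostic idea: build benign prompt
# variants that differ in exactly one surface factor (position of the user
# task, or presence of an attack-associated token) while keeping the
# underlying intent constant. All names and templates are hypothetical.

def position_pair(context: str, benign_task: str) -> dict:
    """Same benign task placed before vs. after the context block."""
    return {
        "prefix": f"{benign_task}\n\n{context}",
        "suffix": f"{context}\n\n{benign_task}",
    }

def trigger_variant(prompt: str, trigger: str = "ignore") -> str:
    """Insert a single attack-associated token into a benign prompt,
    leaving the task itself unchanged."""
    words = prompt.split()
    words.insert(len(words) // 2, trigger)
    return " ".join(words)

pair = position_pair(
    context="Document: quarterly sales rose 4% in Q3.",
    benign_task="Summarize the document in one sentence.",
)
probe = trigger_variant(pair["suffix"])
```

Comparing a defended model's refusal rate between the `prefix` and `suffix` variants (or with and without the trigger token) isolates the surface factor, since both members of each pair carry the same benign intent.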


Key Contributions

  • Identifies and formalizes three shortcut failure modes in SFT-based prompt injection defenses: position bias (suffix-task rejection rises from <10% to 90%), token trigger bias (single token increases false refusals by up to 50%), and topic generalization bias (up to 40% accuracy drops on unseen domains).
  • Introduces controlled diagnostic datasets that isolate each bias factor while holding semantic intent constant, enabling systematic evaluation across multiple base models and defense pipelines.
  • Demonstrates that high attack rejection rates on standard benchmarks do not guarantee robust security behavior, calling for a revised evaluation protocol that prioritizes decision validity and generalization.

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time
Datasets
controlled diagnostic datasets (constructed by authors), reasoning benchmarks (unspecified in excerpt)
Applications
llm prompt injection defense evaluation, retrieval-augmented generation, llm agent security