defense 2025

PromptSleuth: Detecting Prompt Injection via Semantic Intent Invariance

Mengxiao Wang , Yuxuan Zhang , Guofei Gu

0 citations

α

Published on arXiv

2508.20890

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

PromptSleuth consistently outperforms existing state-of-the-art prompt injection defenses across benchmarks while maintaining comparable runtime and cost efficiency, by exploiting the invariance of adversarial intent across attack variants.

PromptSleuth

Novel technique introduced


Large Language Models (LLMs) are increasingly integrated into real-world applications, from virtual assistants to autonomous agents. However, their flexibility also introduces new attack vectors-particularly Prompt Injection (PI), where adversaries manipulate model behavior through crafted inputs. As attackers continuously evolve with paraphrased, obfuscated, and even multi-task injection strategies, existing benchmarks are no longer sufficient to capture the full spectrum of emerging threats. To address this gap, we construct a new benchmark that systematically extends prior efforts. Our benchmark subsumes the two widely-used existing ones while introducing new manipulation techniques and multi-task scenarios, thereby providing a more comprehensive evaluation setting. We find that existing defenses, though effective on their original benchmarks, show clear weaknesses under our benchmark, underscoring the need for more robust solutions. Our key insight is that while attack forms may vary, the adversary's intent-injecting an unauthorized task-remains invariant. Building on this observation, we propose PromptSleuth, a semantic-oriented defense framework that detects prompt injection by reasoning over task-level intent rather than surface features. Evaluated across state-of-the-art benchmarks, PromptSleuth consistently outperforms existing defense while maintaining comparable runtime and cost efficiency. These results demonstrate that intent-based semantic reasoning offers a robust, efficient, and generalizable strategy for defending LLMs against evolving prompt injection threats.


Key Contributions

  • A new prompt injection benchmark that subsumes two widely-used existing benchmarks while adding diverse manipulation techniques (paraphrase, obfuscation) and multi-task injection scenarios
  • PromptSleuth, a semantic-oriented defense that detects prompt injection by reasoning over invariant task-level adversarial intent rather than surface-level syntactic patterns
  • Empirical demonstration that existing defenses degrade significantly on the new benchmark while PromptSleuth consistently outperforms them with comparable runtime and cost

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
inference_timeblack_box
Applications
llm virtual assistantsautonomous llm agentsllm-integrated applications