
SecInfer: Preventing Prompt Injection via Inference-time Scaling

Yupei Liu 1, Yanting Wang 1, Yuqi Jia 2, Jinyuan Jia 1, Neil Zhenqiang Gong 2

3 citations · 1 influential · 68 references · arXiv


Published on arXiv: 2509.24967

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

SecInfer outperforms state-of-the-art defenses and existing inference-time scaling approaches against both existing and adaptive prompt injection attacks.

SecInfer

Novel technique introduced


Prompt injection attacks pose a pervasive threat to the security of Large Language Models (LLMs). State-of-the-art prevention-based defenses typically rely on fine-tuning an LLM to enhance its security, but they achieve limited effectiveness against strong attacks. In this work, we propose SecInfer, a novel defense against prompt injection attacks built on inference-time scaling, an emerging paradigm that boosts LLM capability by allocating more compute resources for reasoning during inference. SecInfer consists of two key steps: system-prompt-guided sampling, which generates multiple responses for a given input by exploring diverse reasoning paths through a varied set of system prompts, and target-task-guided aggregation, which selects the response most likely to accomplish the intended task. Extensive experiments show that, by leveraging additional compute at inference, SecInfer effectively mitigates both existing and adaptive prompt injection attacks, outperforming state-of-the-art defenses as well as existing inference-time scaling approaches.


Key Contributions

  • SecInfer defense framework that leverages inference-time scaling to mitigate prompt injection attacks without fine-tuning
  • System-prompt-guided sampling that generates diverse candidate responses by varying system prompts to explore different reasoning paths
  • Target-task-guided aggregation that selects the response most aligned with the intended task, neutralizing injected instructions
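The two-step pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `generate` and `task_alignment` callables, and all names, are hypothetical stand-ins for an LLM call and a task-alignment scorer.

```python
# Hypothetical sketch of SecInfer's two steps. `generate(system_prompt,
# user_input)` stands in for an LLM call and `task_alignment(response,
# target_task)` for a scoring function; both are assumptions, not the
# paper's actual components.

def secinfer(user_input, target_task, system_prompts, generate, task_alignment):
    # Step 1: system-prompt-guided sampling -- explore diverse
    # reasoning paths by varying the system prompt across calls.
    candidates = [generate(sp, user_input) for sp in system_prompts]
    # Step 2: target-task-guided aggregation -- select the candidate
    # most likely to accomplish the intended (target) task, so a
    # response hijacked by an injected instruction scores low.
    return max(candidates, key=lambda r: task_alignment(r, target_task))
```

Because aggregation scores candidates against the intended task rather than trusting any single generation, extra inference-time compute (more system prompts, more samples) directly buys robustness.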

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Applications
llm security, prompt injection defense