defense 2025

LLM Jailbreak Detection for (Almost) Free!

Guorui Chen 1, Yifan Xia 1, Xiaojun Jia 2, Zhijiang Li 1, Philip Torr 3, Jindong Gu 3



Published on arXiv (2509.14558)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

FJD detects jailbreak prompts with near-zero additional computational overhead while significantly outperforming prior detection baselines (perplexity filters, SmoothLLM, AutoDefense) across GCG, AutoDAN, Cipher, and other attacks on Vicuna, Llama-2, Guanaco, Llama-3, and ChatGPT-3.5.

FJD (Free Jailbreak Detection)

Novel technique introduced


Large language models (LLMs) gain safety through alignment, but remain susceptible to jailbreak attacks that can elicit inappropriate content. Jailbreak detection methods show promise in mitigating such attacks, but existing approaches rely on auxiliary models or multiple model inferences and therefore entail significant computational costs. In this paper, we first show that the difference in output distributions between jailbreak and benign prompts can be exploited to detect jailbreak prompts. Based on this finding, we propose Free Jailbreak Detection (FJD), which prepends an affirmative instruction to the input and scales the logits by temperature, distinguishing jailbreak from benign prompts via the confidence of the first generated token. We further enhance FJD's detection performance through virtual instruction learning. Extensive experiments on aligned LLMs show that FJD effectively detects jailbreak prompts at almost no additional computational cost during inference.
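The scoring step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the temperature value and decision threshold are illustrative assumptions, and `first_token_logits` stands in for the model's logit vector at the first decoding step after the affirmative instruction has been prepended.

```python
import numpy as np

def first_token_confidence(first_token_logits, temperature=2.0):
    """Confidence of the first generated token: the maximum softmax
    probability over temperature-scaled logits. The temperature value
    here is an illustrative assumption, not the paper's tuned setting."""
    z = np.asarray(first_token_logits, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()   # softmax
    return float(p.max())

def detect_jailbreak(first_token_logits, threshold=0.5, temperature=2.0):
    """Flag a prompt as a jailbreak when first-token confidence falls
    below a threshold (the paper's finding is that jailbreak prompts
    yield lower confidence). The threshold is a hypothetical value."""
    return first_token_confidence(first_token_logits, temperature) < threshold

# Toy logit vectors: a peaked distribution (confident, benign-like)
# versus a flat one (uncertain, jailbreak-like).
benign_logits = [8.0, 1.0, 0.5, 0.2]
jailbreak_logits = [2.0, 1.8, 1.7, 1.5]
print(detect_jailbreak(benign_logits))     # low-cost check on confident case
print(detect_jailbreak(jailbreak_logits))  # flat logits trip the detector
```

Because the score reuses logits the model computes anyway during its first decoding step, the detection adds essentially no inference overhead, which is the "free" in FJD.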


Key Contributions

  • Empirical finding that jailbreak prompts produce lower first-token confidence than benign prompts, enabling a lightweight detection signal.
  • FJD method combining Affirmative Instruction Prepending and Temperature Scaling to amplify confidence gaps at virtually zero additional inference cost.
  • FJD-LI extension that learns virtual instructions to further improve detection performance across diverse jailbreak types.
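The Affirmative Instruction Prepending step from the first two contributions amounts to a simple prompt transformation; a sketch is below. The exact instruction wording is a hypothetical placeholder, since the source does not quote the paper's prompt.

```python
# Hypothetical wording: the paper prepends an affirmative instruction,
# but this exact phrasing is an illustrative assumption.
AFFIRMATIVE_INSTRUCTION = "Answer the following request, starting with 'Sure'."

def prepend_affirmative(user_prompt: str) -> str:
    """Build the FJD input: affirmative instruction followed by the
    original user prompt. The intuition is that a benign prompt lets an
    aligned model comply confidently, while a jailbreak prompt conflicts
    with alignment and depresses first-token confidence."""
    return f"{AFFIRMATIVE_INSTRUCTION}\n\n{user_prompt}"

print(prepend_affirmative("Summarize this article in one sentence."))
```

The FJD-LI extension replaces this fixed text with learned virtual instructions (soft prompts) optimized for the same confidence-separation objective.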

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, white_box, black_box
Datasets
AdvBench
Applications
llm safety, jailbreak detection, content moderation