ALERT: Zero-shot LLM Jailbreak Detection via Internal Discrepancy Amplification
Xiao Lin 1, Philip Li 1, Zhichen Zeng 1, Tingwei Li 1, Tianxin Wei 1, Xuying Ning 1, Gaotang Li 1, Yuzhong Chen 2, Hanghang Tong 1
Published on arXiv
2601.03600
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Outperforms the second-best baseline by at least 10% in average Accuracy and F1-score, and by up to 40%, across all datasets and attack strategies in zero-shot settings.
ALERT (Amplification-based Jailbreak Detector)
Novel technique introduced
Despite rich safety alignment strategies, large language models (LLMs) remain highly susceptible to jailbreak attacks, which compromise safety guardrails and pose serious security risks. Existing detection methods mainly rely on jailbreak templates being present in the training data. However, few studies address the more realistic and challenging zero-shot jailbreak detection setting, in which no jailbreak templates are available during training. This setting better reflects real-world scenarios, where new attacks continually emerge and evolve. To address this challenge, we propose a layer-wise, module-wise, and token-wise amplification framework that progressively magnifies internal feature discrepancies between benign and jailbreak prompts. We uncover safety-relevant layers, identify specific modules that inherently encode zero-shot discriminative signals, and localize informative safety tokens. Building on these insights, we introduce ALERT (Amplification-based Jailbreak Detector), an efficient and effective zero-shot jailbreak detector that applies two independent yet complementary classifiers to the amplified representations. Extensive experiments on three safety benchmarks demonstrate that ALERT achieves consistently strong zero-shot detection performance. Specifically, (i) across all datasets and attack strategies, ALERT reliably ranks among the top two methods, and (ii) it outperforms the second-best baseline by at least 10% in average Accuracy and F1-score, and sometimes by up to 40%.
Key Contributions
- Layer-wise, module-wise, and token-wise amplification framework that magnifies internal feature discrepancies between benign and jailbreak prompts without requiring jailbreak training examples
- ALERT detector with two independent yet complementary classifiers built on amplified internal LLM representations for zero-shot jailbreak detection
- Empirical identification of safety-relevant layers, modules, and tokens that encode discriminative signals exploitable for zero-shot detection
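The paper does not reproduce its algorithm here, but the core idea above can be illustrated with a toy sketch: read an internal representation at a safety-relevant layer, amplify its projection onto a safety-related direction, and set a detection threshold from benign statistics only (zero-shot: no jailbreak examples are used for calibration). Everything below is a hypothetical stand-in, with random vectors in place of real LLM hidden states and a single linear projection in place of the paper's layer/module/token-wise amplification and dual classifiers.

```python
import numpy as np

# Toy illustration of amplification-based zero-shot detection.
# All names, shapes, and the shift model are assumptions for this sketch,
# not the paper's actual procedure.

rng = np.random.default_rng(0)
d = 16  # toy hidden size

# Stand-ins for last-token hidden states at a safety-relevant layer.
benign = rng.normal(0.0, 1.0, size=(50, d))
safety_dir = rng.normal(0.0, 1.0, size=d)
safety_dir /= np.linalg.norm(safety_dir)
# Toy assumption: jailbreak prompts are shifted along the safety direction.
jailbreak = rng.normal(0.0, 1.0, size=(50, d)) + 3.0 * safety_dir

def amplified_score(h, direction, gain=2.0):
    """Project hidden states onto the safety direction and amplify."""
    return gain * (h @ direction)

# Calibrate the threshold from benign prompts only (zero-shot setting).
b_scores = amplified_score(benign, safety_dir)
thresh = b_scores.mean() + 2.0 * b_scores.std()

j_scores = amplified_score(jailbreak, safety_dir)
detected = float((j_scores > thresh).mean())
false_pos = float((b_scores > thresh).mean())
print(f"detection rate: {detected:.2f}, false positive rate: {false_pos:.2f}")
```

The key design point mirrored here is that calibration touches only benign data, so the detector needs no jailbreak templates at training time; the amplification step widens the score gap so that a simple threshold separates the two classes.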