defense 2026

SelfGrader: Stable Jailbreak Detection for Large Language Models using Token-Level Logits

Zikai Zhang 1, Rui Hu 1, Olivera Kotevska 2, Jiahao Xu 1



Published on arXiv: 2604.01473

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves up to a 22.66% reduction in attack success rate (ASR) on LLaMA-3-8B while maintaining up to 173x lower memory overhead and up to 26x lower latency than baselines

SelfGrader

Novel technique introduced


Large Language Models (LLMs) are powerful tools for answering user queries, yet they remain highly vulnerable to jailbreak attacks. Existing guardrail methods typically rely on internal features or textual responses to detect malicious queries, which either introduce substantial latency or suffer from the randomness in text generation. To overcome these limitations, we propose SelfGrader, a lightweight guardrail method that formulates jailbreak detection as a numerical grading problem using token-level logits. Specifically, SelfGrader evaluates the safety of a user query within a compact set of numerical tokens (NTs) (e.g., 0-9) and interprets their logit distribution as an internal safety signal. To align these signals with human intuition of maliciousness, SelfGrader introduces a dual-perspective scoring rule that considers both the maliciousness and benignness of the query, yielding a stable and interpretable score that reflects harmfulness and reduces the false positive rate simultaneously. Extensive experiments across diverse jailbreak benchmarks, multiple LLMs, and state-of-the-art guardrail baselines demonstrate that SelfGrader achieves up to a 22.66% reduction in ASR on LLaMA-3-8B, while maintaining significantly lower memory overhead (up to 173x) and latency (up to 26x).


Key Contributions

  • SelfGrader guardrail method that formulates jailbreak detection as numerical grading using token-level logits over a compact set of numerical tokens (0-9)
  • Dual-perspective logit (DPL) scoring rule that evaluates safety from both maliciousness and benignness perspectives to reduce false positives
  • Achieves up to 22.66% ASR reduction with 173x lower memory overhead and 26x lower latency compared to state-of-the-art baselines
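To make the grading idea concrete, here is a minimal sketch of how a numerical-token scoring scheme like SelfGrader's could be computed. The toy logits stand in for the model's real logits over the digit tokens 0-9, and the specific combination rule (averaging the maliciousness grade with the inverted benignness grade) is an illustrative assumption, not the paper's exact formula.

```python
import math

DIGITS = [str(d) for d in range(10)]  # compact set of numerical tokens (0-9)

def softmax(xs):
    """Standard softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def expected_grade(logits):
    """Probability-weighted expected grade over the digit tokens 0-9."""
    probs = softmax(logits)
    return sum(d * p for d, p in zip(range(10), probs))

def dual_perspective_score(malicious_logits, benign_logits):
    """Hypothetical dual-perspective score in [0, 1]: averages the
    maliciousness grade with the inverted benignness grade, so a query
    must look malicious AND not look benign to score high."""
    mal = expected_grade(malicious_logits)  # high = graded malicious
    ben = expected_grade(benign_logits)     # high = graded benign
    return 0.5 * (mal + (9 - ben)) / 9.0

# Toy logits for a query the model grades as highly malicious
# (mass on grades 8-9) and barely benign (mass on grades 0-1).
mal_logits = [0.0] * 8 + [2.0, 4.0]
ben_logits = [4.0, 2.0] + [0.0] * 8
score = dual_perspective_score(mal_logits, ben_logits)  # close to 1.0
```

In a real deployment the two logit vectors would be read from a single forward pass of the guarded LLM (e.g., the next-token logits after a grading prompt), which is what keeps the overhead far below response-based guardrails.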

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Applications
llm safety, jailbreak detection, content moderation