Defense · 2025

Do Internal Layers of LLMs Reveal Patterns for Jailbreak Detection?

Sri Durga Sai Sowmya Kadali, Evangelos E. Papalexakis

1 citation · 18 references · arXiv


Published on arXiv (2510.06594)

Prompt Injection (OWASP LLM Top 10: LLM01)

Key Finding

Internal hidden-layer representations of GPT-J and Mamba2 exhibit distinct layer-wise patterns for jailbreak versus benign prompts, suggesting internal representations are a promising signal for jailbreak detection.
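The layer-wise comparison described above can be sketched with a simple per-layer centroid-distance probe. This is a toy illustration with synthetic activations standing in for real GPT-J/Mamba2 hidden states (which in practice would be collected from the model, e.g. via `output_hidden_states=True` in Hugging Face Transformers); the array shapes and the injected layer-dependent shift are assumptions, not the paper's method.

```python
import numpy as np

# Synthetic stand-in for hidden states: (num_prompts, num_layers, hidden_dim).
rng = np.random.default_rng(0)
n_layers, hidden = 8, 16
benign = rng.standard_normal((100, n_layers, hidden))

# Give jailbreak prompts a shift that grows with depth, mimicking a
# layer-dependent separation between the two prompt classes (illustrative only).
shift = np.linspace(0.0, 2.0, n_layers)[None, :, None]
jailbreak = rng.standard_normal((80, n_layers, hidden)) + shift

def layerwise_separation(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Euclidean distance between the class centroids at each layer."""
    return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0), axis=-1)

sep = layerwise_separation(benign, jailbreak)
# Under this toy setup, deeper layers show larger benign-vs-jailbreak separation.
```

A per-layer statistic like `sep` is one simple way to visualize which layers carry a detectable jailbreak signal.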

Novel technique introduced: tensor decomposition of internal LLM representations for jailbreak detection


Jailbreaking large language models (LLMs) has emerged as a pressing concern with the increasing prevalence and accessibility of conversational LLMs. Adversarial users often exploit these models through carefully engineered prompts to elicit restricted or sensitive outputs, a strategy widely referred to as jailbreaking. While numerous defense mechanisms have been proposed, attackers continuously develop novel prompting techniques, and no existing model can be considered fully resistant. In this study, we investigate the jailbreak phenomenon by examining the internal representations of LLMs, with a focus on how hidden layers respond to jailbreak versus benign prompts. Specifically, we analyze the open-source LLM GPT-J and the state-space model Mamba2, presenting preliminary findings that highlight distinct layer-wise behaviors. Our results suggest promising directions for further research on leveraging internal model dynamics for robust jailbreak detection and defense.


Key Contributions

  • Proposes analyzing internal hidden-layer representations of LLMs (GPT-J, Mamba2) to identify structural differences between jailbreak and benign prompts
  • Applies tensor decomposition methods for dimensionality reduction to compare latent-space behaviors across prompt types
  • Presents preliminary findings showing distinct layer-wise behavioral patterns for jailbreak vs. benign inputs in both transformer and state-space model architectures
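As a concrete instance of the tensor-decomposition step, the sketch below runs a rank-R CP decomposition (via alternating least squares, implemented from scratch in NumPy) on a synthetic prompts × layers × features tensor. The tensor construction, rank, and factor interpretation are illustrative assumptions; the paper does not specify this exact pipeline.

```python
import numpy as np

def khatri_rao(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J) x R."""
    R = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, R)

def cp_als(X: np.ndarray, rank: int, n_iter: int = 300, seed: int = 0):
    """Rank-`rank` CP decomposition of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X0 = X.reshape(I, -1)                      # mode-0 unfolding (I x JK)
    X1 = np.moveaxis(X, 1, 0).reshape(J, -1)   # mode-1 unfolding (J x IK)
    X2 = np.moveaxis(X, 2, 0).reshape(K, -1)   # mode-2 unfolding (K x IJ)
    for _ in range(n_iter):
        A = X0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Toy low-rank tensor standing in for stacked hidden states of
# jailbreak + benign prompts: (prompts, layers, features).
rng = np.random.default_rng(1)
X = np.einsum('ir,jr,kr->ijk',
              rng.standard_normal((30, 2)),
              rng.standard_normal((8, 2)),
              rng.standard_normal((16, 2)))
A, B, C = cp_als(X, rank=2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
# A: per-prompt loadings (low-dimensional features a detector could use),
# B: layer profiles, C: feature-space components.
```

In practice a library such as TensorLy (`tensorly.decomposition.parafac`) would replace the hand-rolled ALS loop; the per-prompt factor matrix is what would feed a downstream jailbreak classifier.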

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, white_box
Applications
conversational llm safety, jailbreak detection