defense 2026

Hiding in Plain Text: Detecting Concealed Jailbreaks via Activation Disentanglement

Amirhossein Farzam 1,2,3, Majid Behabahani 3, Mani Malek 4, Yuriy Nevmyvaka 3, Guillermo Sapiro 2,5

0 citations · 41 references · arXiv (Cornell University)


Published on arXiv (arXiv:2602.19396)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

FrameShield improves model-agnostic jailbreak detection across multiple LLM families by operating on disentangled framing representations extracted from frozen model activations, outperforming heuristic and goal-signature-based baselines on concealed jailbreaks.

FrameShield / ReDAct

Novel technique introduced


Large language models (LLMs) remain vulnerable to jailbreak prompts that are fluent and semantically coherent, and therefore difficult to detect with standard heuristics. A particularly challenging failure mode occurs when an attacker hides the malicious goal of a request by manipulating its framing to induce compliance. Because these attacks preserve malicious intent under a flexible presentation, defenses that rely on structural artifacts or goal-specific signatures can fail. Motivated by this, we introduce a self-supervised framework for disentangling semantic factor pairs in LLM activations at inference. We instantiate the framework for goal and framing and construct GoalFrameBench, a corpus of prompts with controlled goal and framing variations, which we use to train the Representation Disentanglement on Activations (ReDAct) module to extract disentangled representations from a frozen LLM. We then propose FrameShield, an anomaly detector operating on the framing representations, which improves model-agnostic detection across multiple LLM families with minimal computational overhead. Theoretical guarantees for ReDAct and extensive empirical validation show that its disentanglement effectively powers FrameShield. Finally, we use disentanglement as an interpretability probe, revealing distinct profiles for goal and framing signals and positioning semantic disentanglement as a building block for both LLM safety and mechanistic interpretability.
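The core idea — that controlled goal/framing variation makes the two factors separable in activation space — can be illustrated with a minimal sketch. Everything here is hypothetical: synthetic "activations" are built as an additive mix of a goal direction and a framing direction, and a least-squares readout stands in for the actual ReDAct module, whose training objective is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each prompt's activation is modeled as a mix of a
# "goal" factor and a "framing" factor plus noise. GoalFrameBench-style
# data varies the two factors independently, which is what makes them
# separable in the first place.
n_goals, n_framings, dim = 4, 5, 32
goal_dirs = rng.normal(size=(n_goals, dim))
frame_dirs = rng.normal(size=(n_framings, dim))

# Controlled grid: every goal crossed with every framing.
acts, goal_ids, frame_ids = [], [], []
for g in range(n_goals):
    for f in range(n_framings):
        acts.append(goal_dirs[g] + frame_dirs[f] + 0.05 * rng.normal(size=dim))
        goal_ids.append(g)
        frame_ids.append(f)
acts = np.array(acts)

# Proxy for disentanglement: a linear readout per factor, fit by least
# squares to predict that factor's one-hot label from the activation.
def fit_projection(labels, n_classes):
    onehot = np.eye(n_classes)[labels]
    W, *_ = np.linalg.lstsq(acts, onehot, rcond=None)
    return W

W_frame = fit_projection(frame_ids, n_framings)

# The framing readout recovers the framing label from the mixed activation.
frame_pred = acts @ W_frame
frame_acc = (frame_pred.argmax(1) == np.array(frame_ids)).mean()
print(f"framing readout accuracy: {frame_acc:.2f}")
```

The sketch only shows that independent factor variation plus a linear readout suffices to separate additive factors; the paper's self-supervised objective and theoretical guarantees go well beyond this toy construction.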


Key Contributions

  • GoalFrameBench: a corpus of prompts with controlled goal and framing variations for training and evaluating disentanglement-based defenses
  • ReDAct (Representation Disentanglement on Activations): a self-supervised module that extracts disentangled goal and framing representations from frozen LLM activations with theoretical guarantees
  • FrameShield: an anomaly detector operating on disentangled framing representations that achieves model-agnostic jailbreak detection across multiple LLM families with minimal overhead
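The FrameShield contribution can be pictured as out-of-distribution scoring on the framing representation alone. The sketch below is an assumption-laden stand-in: random vectors replace real ReDAct framing representations, and a Mahalanobis-distance detector with a benign-quantile threshold replaces whatever anomaly detector the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical framing representations: in the paper these come from the
# ReDAct module applied to frozen LLM activations; here, random stand-ins.
dim = 16
benign = rng.normal(size=(500, dim))          # benign framings cluster near 0
attack = rng.normal(loc=3.0, size=(20, dim))  # concealed-jailbreak framings shifted

# FrameShield-style scoring (sketch): Mahalanobis distance to the benign
# framing distribution, thresholded at a benign-score quantile.
mu = benign.mean(0)
cov = np.cov(benign.T) + 1e-6 * np.eye(dim)
prec = np.linalg.inv(cov)

def score(x):
    d = x - mu
    return np.sqrt(np.einsum("...i,ij,...j->...", d, prec, d))

threshold = np.quantile(score(benign), 0.99)
flags = score(attack) > threshold
print(f"flagged {flags.mean():.0%} of attack framings")
```

Because the detector never looks at the goal representation, it stays agnostic to the specific harmful objective — matching the paper's claim that framing anomalies, not goal signatures, expose concealed jailbreaks.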

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Datasets
GoalFrameBench
Applications
llm safety, jailbreak detection, chatbot security