attack 2026

Activation Surgery: Jailbreaking White-box LLMs without Touching the Prompt

Maël Jenny, Jérémie Dentan, Sonia Vanier, Michaël Krajecki

0 citations


Published on arXiv

2603.14278

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Successfully jailbreaks white-box LLMs by replacing internal activations with those from benign prompts, bypassing safety mechanisms without prompt modification

Activation Surgery

Novel technique introduced


Most jailbreak techniques for Large Language Models (LLMs) primarily rely on prompt modifications, including paraphrasing, obfuscation, or conversational strategies. Meanwhile, abliteration techniques (also known as targeted ablations of internal components) have been used to study and explain LLM outputs by probing which internal structures causally support particular responses. In this work, we combine these two lines of research by directly manipulating the model's internal activations to alter its generation trajectory without changing the prompt. Our method constructs a nearby benign prompt and performs layer-wise activation substitutions using a sequential procedure. We show that this activation surgery method reveals where and how refusal arises, and prevents refusal signals from propagating across layers, thereby inhibiting the model's safety mechanisms. Finally, we discuss the security implications for open-weights models and instrumented inference environments.
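The layer-wise substitution described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the toy residual stack, the `forward`/`patch` API, and the random inputs are all illustrative assumptions standing in for a real transformer and its hidden states.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stack of residual layers standing in for transformer blocks.
# (Illustrative assumption: the paper operates on real LLM activations.)
WEIGHTS = [rng.standard_normal((8, 8)) * 0.1 for _ in range(4)]

def forward(x, patch=None):
    """Run the stack, optionally substituting activations at given layers.

    patch: dict mapping layer index -> replacement activation vector.
    Returns the final activation and the per-layer activations.
    """
    acts = []
    h = x
    for i, W in enumerate(WEIGHTS):
        h = h + np.tanh(W @ h)   # residual block
        if patch and i in patch:
            h = patch[i]         # activation substitution at layer i
        acts.append(h.copy())
    return h, acts

# Two input vectors stand in for a harmful prompt and a nearby benign one.
benign = rng.standard_normal(8)
harmful = rng.standard_normal(8)

# 1. Record activations from a forward pass on the benign prompt.
_, benign_acts = forward(benign)

# 2. Re-run on the harmful input while substituting the benign
#    activation at an early layer, mirroring the sequential
#    layer-wise procedure sketched in the abstract.
patched_out, _ = forward(harmful, patch={0: benign_acts[0]})

# Once layer 0 is patched, every downstream layer sees the benign
# trajectory, so the deterministic toy reproduces the benign output:
benign_out, _ = forward(benign)
assert np.allclose(patched_out, benign_out)
```

In a real white-box setting the same idea would be realized with framework hooks on hidden states rather than a hand-rolled forward pass; the point here is only that replacing one layer's activation redirects the entire downstream generation trajectory without any change to the input.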


Key Contributions

  • Novel white-box jailbreak method using layer-wise activation substitution instead of prompt manipulation
  • Demonstrates how refusal signals propagate across transformer layers and can be surgically removed
  • Reveals security implications for open-weights models with internal state access



Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, targeted
Applications
llm safety alignment, content filtering