
Unraveling LLM Jailbreaks Through Safety Knowledge Neurons

Chongwen Zhao , Yutong Ke , Kaizhu Huang


Published on arXiv (2509.01631)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

SafeTuning consistently reduces jailbreak attack success rates across multiple LLMs and outperforms all four baseline defenses by reinforcing safety-critical neurons identified through neuron-level interpretability.

SafeTuning

Novel technique introduced


Large Language Models (LLMs) are attracting increasing attention across a wide range of applications. There is growing concern, however, that some users attempt to exploit these models for malicious purposes, such as synthesizing controlled substances or propagating disinformation, through attacks known as "jailbreaks." While prior studies have defended against jailbreak attacks by modifying output distributions or detecting harmful content, the rationale for why these defenses work remains elusive. In this work, we present a novel neuron-level interpretability method that focuses on the role of safety-related knowledge neurons. Unlike existing approaches, our method projects the model's internal representations into a more consistent and interpretable vocabulary space. We then show that adjusting the activation of safety-related neurons can effectively control the model's behavior, achieving a mean attack success rate (ASR) above 97%. Building on this insight, we propose SafeTuning, a fine-tuning strategy that reinforces safety-critical neurons to improve model robustness against jailbreaks. SafeTuning consistently reduces attack success rates across multiple LLMs and outperforms all four baseline defenses. These findings offer a new perspective on understanding and defending against jailbreak attacks.
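The vocabulary-space projection idea can be sketched in a minimal, logit-lens-style form: a neuron's value vector is scored against each token's unembedding row, and the top-ranked tokens hint at what the neuron encodes (e.g., refusal vocabulary for a safety neuron). All dimensions, tokens, and vectors below are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: project an MLP neuron's value vector into
# vocabulary space by scoring it against each token's unembedding row.

def project_neuron_to_vocab(value_vector, unembedding, vocab):
    # Score each token as the dot product of the neuron's value vector
    # with that token's unembedding row, then rank tokens by score.
    scores = []
    for tok, row in zip(vocab, unembedding):
        s = sum(v * u for v, u in zip(value_vector, row))
        scores.append((tok, s))
    return sorted(scores, key=lambda p: p[1], reverse=True)

# Toy example: 3-dimensional hidden space, 4-token vocabulary.
vocab = ["sorry", "cannot", "sure", "here"]
unembedding = [
    [0.9, 0.1, 0.0],  # "sorry"
    [0.8, 0.2, 0.1],  # "cannot"
    [0.0, 0.9, 0.1],  # "sure"
    [0.1, 0.0, 0.9],  # "here"
]
value_vector = [1.0, 0.2, 0.0]  # hypothetical safety neuron

ranking = project_neuron_to_vocab(value_vector, unembedding, vocab)
print([tok for tok, _ in ranking[:2]])  # refusal-style tokens rank highest
```

In a real transformer the unembedding matrix and neuron value vectors would come from the model's weights; this toy version only illustrates the ranking step.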


Key Contributions

  • Neuron-level interpretability method that projects internal LLM representations into vocabulary space to identify safety-related knowledge neurons
  • Demonstrates that directly adjusting safety neuron activations achieves >97% mean attack success rate, validating their causal role in safety behavior
  • SafeTuning: a fine-tuning strategy that reinforces safety-critical neurons, outperforming four baseline jailbreak defenses across multiple LLMs
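A SafeTuning-style update can be sketched as selective fine-tuning: apply gradient updates only to parameters tied to the identified safety-critical neurons, leaving everything else frozen. The parameter values, gradients, neuron indices, and learning rate below are illustrative assumptions, not the paper's actual procedure.

```python
# Hypothetical sketch of SafeTuning-style selective updates: only the
# parameters at safety-neuron indices receive a gradient step; all
# other weights are left untouched (frozen).

def safetune_step(weights, grads, safety_neurons, lr):
    return [
        w - lr * g if i in safety_neurons else w
        for i, (w, g) in enumerate(zip(weights, grads))
    ]

weights = [0.10, 0.50, -0.30, 0.80]
grads = [0.20, -0.40, 0.10, 0.30]
updated = safetune_step(weights, grads, safety_neurons={1, 3}, lr=0.1)
# Indices 0 and 2 are frozen; indices 1 and 3 move against the gradient.
```

In practice this masking would be applied per-tensor (e.g., zeroing gradients for non-safety rows of an MLP weight matrix before the optimizer step) rather than over a flat parameter list.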

🛡️ Threat Analysis


Details

Domains: nlp
Model Types: llm, transformer
Threat Tags: white_box, inference_time, training_time
Applications: large language model safety, jailbreak defense