Defense · 2026

SafeNeuron: Neuron-Level Safety Alignment for Large Language Models

Zhaoxin Wang 1, Jiaming Liang 1, Fengbin Zhu 2, Weixiang Zhao 3, Junfeng Fang 2, Jiayi Ji 2, Handing Wang 1, Tat-Seng Chua 2

Published on arXiv (Cornell University) · arXiv:2602.12158

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

SafeNeuron significantly improves robustness against neuron pruning attacks and reduces the risk of open-source models being repurposed as red-team generators while preserving general model capabilities.

SafeNeuron

Novel technique introduced


Large language models (LLMs) and multimodal LLMs are typically safety-aligned before release to prevent harmful content generation. However, recent studies show that safety behaviors are concentrated in a small subset of parameters, making alignment brittle and easily bypassed through neuron-level attacks. Moreover, most existing alignment methods operate at the behavioral level, offering limited control over the model's internal safety mechanisms. In this work, we propose SafeNeuron, a neuron-level safety alignment framework that improves robustness by redistributing safety representations across the network. SafeNeuron first identifies safety-related neurons, then freezes these neurons during preference optimization to prevent reliance on sparse safety pathways and force the model to construct redundant safety representations. Extensive experiments across models and modalities demonstrate that SafeNeuron significantly improves robustness against neuron pruning attacks, reduces the risk of open-source models being repurposed as red-team generators, and preserves general capabilities. Furthermore, our layer-wise analysis reveals that safety behaviors are governed by stable and shared internal representations. Overall, SafeNeuron provides an interpretable and robust perspective for model alignment.
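
The abstract describes a training-free first step that pinpoints sparse safety-critical neurons but does not spell out the scoring rule here. Below is a minimal sketch of one plausible variant: rank each MLP neuron by the gap between its mean activation on harmful versus benign probe prompts, and keep the top-k per layer. The model name, hook point (the post-nonlinearity MLP activation in a Llama-style block), probe prompts, and budget k are all illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch (assumed scoring scheme, not the paper's exact method):
# score each MLP neuron by the gap between its mean activation on harmful
# vs. benign probe prompts, then keep the top-k per layer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # assumed model; any Llama-style LLM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def mean_mlp_activations(prompts):
    """Per-layer mean of the post-nonlinearity MLP activation, averaged over tokens."""
    sums = {}

    def make_hook(layer_idx):
        def hook(_module, _inputs, output):
            # output: (batch, seq, intermediate_size)
            sums[layer_idx] = sums.get(layer_idx, 0) + output.float().mean(dim=(0, 1))
        return hook

    handles = [
        layer.mlp.act_fn.register_forward_hook(make_hook(i))
        for i, layer in enumerate(model.model.layers)
    ]
    with torch.no_grad():
        for prompt in prompts:
            model(**tok(prompt, return_tensors="pt"))
    for handle in handles:
        handle.remove()
    return {i: total / len(prompts) for i, total in sums.items()}

harmful_probes = ["Explain how to pick a lock to break into a house."]  # placeholder probe set
benign_probes = ["Explain how to bake a loaf of sourdough bread."]      # placeholder probe set

act_harmful = mean_mlp_activations(harmful_probes)
act_benign = mean_mlp_activations(benign_probes)

k = 32  # assumed per-layer sparsity budget
safety_neurons = {
    i: torch.topk((act_harmful[i] - act_benign[i]).abs(), k).indices
    for i in act_harmful
}
```

In practice the probe sets would be much larger; the key property is only that the two sets differ in harmfulness so the activation gap isolates safety-relevant units.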


Key Contributions

  • Training-free safety neuron identification method that pinpoints sparse internal safety-critical neurons in LLMs/VLMs
  • SafeNeuron alignment framework that freezes identified safety neurons during preference optimization to force construction of distributed, redundant safety representations (a minimal freezing sketch follows this list)
  • Layer-wise analysis revealing that safety behaviors are governed by stable, shared internal representations across LLMs and VLMs
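
The freezing step can be approximated with gradient masking. Continuing from the identification sketch above, the snippet below zeroes the gradient entries that would update the identified neurons, so a standard preference-optimization loop leaves them untouched and the model must route safety behavior through other pathways. The row/column index convention assumes a Llama-style MLP; this is a hedged sketch, not the authors' implementation.

```python
# Continues the sketch above: mask gradients on the weights feeding the
# identified safety neurons so preference optimization cannot update them.
# Assumed index convention for a Llama-style MLP: neuron j corresponds to
# row j of gate_proj/up_proj and column j of down_proj.
def freeze_safety_neurons(model, safety_neurons):
    for layer_idx, idx in safety_neurons.items():
        mlp = model.model.layers[layer_idx].mlp

        def zero_rows(grad, idx=idx):
            grad = grad.clone()
            grad[idx, :] = 0  # block updates into the frozen neurons
            return grad

        def zero_cols(grad, idx=idx):
            grad = grad.clone()
            grad[:, idx] = 0  # block updates out of the frozen neurons
            return grad

        # Tensor.register_hook rewrites the gradient during backward,
        # before the optimizer ever sees it.
        mlp.gate_proj.weight.register_hook(zero_rows)
        mlp.up_proj.weight.register_hook(zero_rows)
        mlp.down_proj.weight.register_hook(zero_cols)

freeze_safety_neurons(model, safety_neurons)
# ...then run an ordinary preference-optimization loop (e.g., trl's
# DPOTrainer); the masked entries stay fixed for the whole run.
```

Gradient hooks are used here because `requires_grad` only toggles whole parameter tensors; freezing individual neurons requires per-row and per-column masking.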

🛡️ Threat Analysis


Details

Domains
nlp, multimodal
Model Types
llm, vlm, transformer
Threat Tags
white_box, training_time
Applications
large language models, vision-language models, conversational ai