
Who Transfers Safety? Identifying and Targeting Cross-Lingual Shared Safety Neurons

Xianhui Zhang 1, Chengyu Xie 1, Linxia Zhu 1, Yonghui Yang 2, Weixiang Zhao 3, Zifeng Cheng 4, Cong Wang 4, Fei Shen 2, Tat-Seng Chua 2

0 citations · arXiv (Cornell University)


Published on arXiv

arXiv:2602.01283

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Fine-tuning only ~0.6% of parameters (SS-Neurons) significantly enhances safety in non-high-resource languages, outperforming full LoRA-based baselines while maintaining general model capabilities.

SS-Neuron Expansion

Novel technique introduced


Multilingual safety remains significantly imbalanced, leaving non-high-resource (NHR) languages vulnerable compared to robust high-resource (HR) ones. Moreover, the neural mechanisms driving safety alignment remain unclear despite observed cross-lingual representation transfer. In this paper, we find that LLMs contain a set of cross-lingual shared safety neurons (SS-Neurons), a remarkably small yet critical neuronal subset that jointly regulates safety behavior across languages. We first identify monolingual safety neurons (MS-Neurons) and validate their causal role in safety refusal behavior through targeted activation and suppression. Our cross-lingual analyses then identify SS-Neurons as the subset of MS-Neurons shared between HR and NHR languages, serving as a bridge to transfer safety capabilities from HR to NHR domains. We observe that suppressing these neurons causes concurrent safety drops across NHR languages, whereas reinforcing them improves cross-lingual defensive consistency. Building on these insights, we propose a simple neuron-oriented training strategy that targets SS-Neurons based on language resource distribution and model architecture. Experiments demonstrate that fine-tuning this tiny neuronal subset outperforms state-of-the-art methods, significantly enhancing NHR safety while maintaining the model's general capabilities. The code and dataset will be available at https://github.com/1518630367/SS-Neuron-Expansion.
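The causal validation described above (suppressing candidate safety neurons and observing the behavioral drop) can be sketched with a standard PyTorch forward hook that zeroes selected hidden activations. This is an illustrative reconstruction, not the authors' code: the module, layer, and neuron indices below are hypothetical placeholders standing in for a transformer FFN block and an identified SS-Neuron set.

```python
import torch
import torch.nn as nn

class TinyMLP(nn.Module):
    """Stand-in for one transformer feed-forward block."""
    def __init__(self, d_model=8, d_ff=16):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.act = nn.GELU()
        self.down = nn.Linear(d_ff, d_model)

    def forward(self, x):
        return self.down(self.act(self.up(x)))

def suppress_neurons(mlp, neuron_ids):
    """Zero the given hidden-neuron activations via a forward hook.

    Returning a tensor from the hook replaces the activation output,
    so the listed neurons contribute nothing downstream.
    """
    def hook(_module, _inputs, output):
        output = output.clone()
        output[..., neuron_ids] = 0.0
        return output
    return mlp.act.register_forward_hook(hook)

torch.manual_seed(0)
mlp = TinyMLP()
x = torch.randn(2, 8)

handle = suppress_neurons(mlp, neuron_ids=[1, 5, 9])  # hypothetical SS-Neuron ids
y_suppressed = mlp(x)
handle.remove()                                        # restore normal behavior
y_full = mlp(x)
```

In a real LLM, the same hook would be attached to the identified MLP layers, and the before/after refusal rate on harmful prompts measures the neurons' causal contribution.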


Key Contributions

  • Identifies cross-lingual shared safety neurons (SS-Neurons), a sparse (<0.3%) subset of neurons that jointly regulate safety refusal behavior across high- and non-high-resource languages in LLMs.
  • Formalizes a neuron-masking attack showing current safety alignment is brittle and can be dismantled by ablating this sparse neuronal subset.
  • Proposes SS-Neuron Expansion, a targeted fine-tuning strategy (<0.6% of parameters) that outperforms state-of-the-art multilingual safety methods while preserving general model capabilities.
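The targeted fine-tuning idea in the last bullet (updating <0.6% of parameters) amounts to freezing the model and letting gradients reach only the weight rows of the selected neurons. A minimal sketch, assuming a gradient mask implements the selection; the layer and indices are illustrative, not the paper's actual SS-Neuron choice:

```python
import torch
import torch.nn as nn

def mask_to_selected_neurons(linear, neuron_ids):
    """Keep gradients only for the weight rows of the selected output neurons."""
    mask = torch.zeros_like(linear.weight)
    mask[neuron_ids, :] = 1.0
    linear.weight.register_hook(lambda grad: grad * mask)

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.GELU(), nn.Linear(16, 8))
selected = [2, 7, 11]  # hypothetical SS-Neuron indices in the first layer

for p in model.parameters():          # freeze everything...
    p.requires_grad_(False)
model[0].weight.requires_grad_(True)  # ...re-enable one weight matrix
mask_to_selected_neurons(model[0], selected)  # ...and mask it to 3 rows

opt = torch.optim.SGD([model[0].weight], lr=0.1)
x, target = torch.randn(4, 8), torch.randn(4, 8)
before = model[0].weight.detach().clone()

loss = nn.functional.mse_loss(model(x), target)
opt.zero_grad()
loss.backward()
opt.step()

# Only the selected rows should have moved.
changed_rows = (model[0].weight.detach() - before).abs().sum(dim=1) > 0
```

With plain SGD (no momentum or weight decay) the masked rows receive exactly zero updates, so the trainable footprint is just the selected rows, mirroring the sub-1% parameter budget the method targets.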

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, training_time, black_box
Datasets
AdvBench-x, MultiJail
Applications
multilingual llm safety, jailbreak defense, cross-lingual safety alignment