Jiaming Liang

h-index: 0 · 0 citations · 0 papers (total)

Papers in Database (1)

defense · arXiv · Feb 12, 2026

SafeNeuron: Neuron-Level Safety Alignment for Large Language Models

Zhaoxin Wang, Jiaming Liang, Fengbin Zhu et al. · Xidian University · National University of Singapore +1 more

Defends LLM safety alignment against neuron pruning attacks by redistributing safety representations across the network via selective neuron freezing

Prompt Injection · nlp · multimodal