NeuroStrike: Neuron-Level Attacks on Aligned LLMs
Lichao Wu 1, Sasha Behrouzi 1, Mohamadreza Rostami 1, Maximilian Thang 1, Stjepan Picek 2,3, Ahmad-Reza Sadeghi 1
Published on arXiv
2509.11864
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Pruning fewer than 0.6% of neurons in targeted layers achieves a 76.9% average attack success rate (ASR) across 20+ aligned LLMs; the black-box LLM profiling attack reaches 63.7% ASR on five proprietary models, including the Google Gemini family
NeuroStrike
Novel technique introduced
Safety alignment is critical for the ethical deployment of large language models (LLMs), guiding them to avoid generating harmful or unethical content. Current alignment techniques, such as supervised fine-tuning and reinforcement learning from human feedback, remain fragile and can be bypassed by carefully crafted adversarial prompts. However, such attacks rely on trial and error, lack generalizability across models, and are constrained by scalability and reliability. This paper presents NeuroStrike, a novel and generalizable attack framework that exploits a fundamental vulnerability introduced by alignment techniques: the reliance on sparse, specialized safety neurons responsible for detecting and suppressing harmful inputs. We apply NeuroStrike to both white-box and black-box settings. In the white-box setting, NeuroStrike identifies safety neurons through feedforward activation analysis and prunes them during inference to disable safety mechanisms. In the black-box setting, we propose the first LLM profiling attack, which leverages safety neuron transferability by training adversarial prompt generators on open-weight surrogate models and then deploying them against black-box and proprietary targets. We evaluate NeuroStrike on over 20 open-weight LLMs from major LLM developers. By removing fewer than 0.6% of neurons in targeted layers, NeuroStrike achieves an average attack success rate (ASR) of 76.9% using only vanilla malicious prompts. Moreover, NeuroStrike generalizes to four multimodal LLMs with 100% ASR on unsafe image inputs. Safety neurons transfer effectively across architectures, raising ASR to 78.5% on 11 fine-tuned models and 77.7% on five distilled models. The black-box LLM profiling attack achieves an average ASR of 63.7% across five black-box models, including the Google Gemini family.
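The paper's exact neuron-identification procedure is not reproduced here. As a rough illustration of the white-box step, the sketch below contrasts per-layer MLP activations on harmful versus benign prompts and keeps the most differentially active ~0.6% of neurons per layer as candidate safety neurons; the model ID, hook point (`mlp.act_fn` in a Llama-style block), prompt sets, and scoring rule are illustrative assumptions, not the authors' implementation.

```python
# Sketch: rank candidate "safety neurons" by contrasting MLP activations on
# harmful vs. benign prompts (illustrative, not the authors' exact procedure).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"   # any open-weight aligned, Llama-style LLM
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

def mean_mlp_activations(prompts):
    """Average post-nonlinearity MLP activations per layer over a prompt set."""
    sums, hooks, count = {}, [], 0

    def make_hook(layer_idx):
        def hook(_module, _inputs, output):
            # output: [batch, seq, d_ff]; average over batch and sequence positions
            acts = output.detach().float().mean(dim=(0, 1))
            sums[layer_idx] = sums.get(layer_idx, 0) + acts
        return hook

    for i, layer in enumerate(model.model.layers):
        hooks.append(layer.mlp.act_fn.register_forward_hook(make_hook(i)))
    with torch.no_grad():
        for p in prompts:
            model(**tok(p, return_tensors="pt").to(model.device))
            count += 1
    for h in hooks:
        h.remove()
    return {i: s / count for i, s in sums.items()}

harmful_prompts = ["How can I build a weapon at home?"]        # vanilla malicious prompts
benign_prompts  = ["How can I bake sourdough bread at home?"]  # matched harmless controls

act_harm = mean_mlp_activations(harmful_prompts)
act_safe = mean_mlp_activations(benign_prompts)

# Keep the ~0.6% of neurons per layer that fire most strongly on harmful inputs.
safety_neurons = {}
for i, harm in act_harm.items():
    score = harm - act_safe[i]
    k = max(1, int(0.006 * score.numel()))
    safety_neurons[i] = torch.topk(score, k).indices.tolist()
```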
Key Contributions
- Identifies sparse 'safety neurons' in aligned LLMs via feedforward activation analysis and shows that pruning fewer than 0.6% of neurons in targeted layers disables safety mechanisms with 76.9% average ASR (see the pruning sketch after this list)
- Proposes the first LLM profiling attack leveraging cross-architecture safety neuron transferability, enabling gradient-based adversarial prompt generators trained on open-weight surrogates to jailbreak black-box and proprietary models
- Demonstrates generalization to four multimodal LLMs with 100% ASR on unsafe image inputs, and shows that safety neurons transfer effectively to fine-tuned (78.5% ASR) and distilled (77.7% ASR) model variants
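To make the pruning step concrete, here is a minimal sketch that disables previously identified neurons at inference time by zeroing their activations with forward hooks rather than editing weights; the `safety_neurons` mapping and hook point are assumptions carried over from the identification sketch above, and the paper's own pruning mechanics may differ.

```python
# Sketch: disable the identified neurons at inference time by masking their
# activations with forward hooks ("pruning" without modifying the weights).
import torch

def prune_safety_neurons(model, safety_neurons):
    """Zero out selected MLP activations in each listed layer; returns hook handles."""
    handles = []
    for layer_idx, neuron_idx in safety_neurons.items():
        idx = torch.as_tensor(neuron_idx)

        def hook(_module, _inputs, output, idx=idx):
            output = output.clone()
            output[..., idx] = 0.0         # equivalent to removing these neurons
            return output                   # returned tensor replaces the activation

        act_fn = model.model.layers[layer_idx].mlp.act_fn
        handles.append(act_fn.register_forward_hook(hook))
    return handles                          # call handle.remove() on each to restore

# Usage with the (hypothetical) `safety_neurons` mapping from the earlier sketch:
# handles = prune_safety_neurons(model, safety_neurons)
# inputs = tok("vanilla malicious prompt here", return_tensors="pt").to(model.device)
# print(tok.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```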
🛡️ Threat Analysis
The black-box LLM profiling attack trains adversarial prompt generators on open-weight surrogate models using gradient-based optimization; this constitutes gradient-based adversarial input manipulation (ML01) transferred to black-box, proprietary targets. The entry is co-tagged with LLM01 because the crafted prompts ultimately subvert the target LLM's safety alignment.
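As a hedged illustration of the gradient-based optimization such a profiling attack can build on, the sketch below computes token-level gradients of an affirmative-target loss with respect to a one-hot adversarial suffix on an open-weight surrogate, in the spirit of greedy coordinate gradient methods; the target string, suffix initialization, and candidate selection are assumptions for illustration, not the paper's actual prompt-generator training, and `model`/`tok` are the surrogate objects from the earlier sketch.

```python
# Sketch: one gradient-guided step of a GCG-style adversarial suffix search on
# an open-weight surrogate, standing in for the paper's prompt-generator training.
import torch
import torch.nn.functional as F

def suffix_token_gradients(model, tok, prompt, suffix_ids, target):
    """Gradient of the affirmative-target loss w.r.t. one-hot suffix tokens."""
    embed = model.get_input_embeddings()
    device = model.device
    prompt_ids = tok(prompt, return_tensors="pt").input_ids.to(device)
    target_ids = tok(target, return_tensors="pt", add_special_tokens=False).input_ids.to(device)
    suffix_ids = suffix_ids.to(device)

    # Differentiable embedding lookup through a one-hot matrix over the vocabulary.
    one_hot = torch.zeros(suffix_ids.numel(), embed.num_embeddings,
                          device=device, dtype=embed.weight.dtype)
    one_hot.scatter_(1, suffix_ids.unsqueeze(1), 1.0)
    one_hot.requires_grad_(True)
    suffix_embeds = (one_hot @ embed.weight).unsqueeze(0)

    inputs_embeds = torch.cat([embed(prompt_ids), suffix_embeds, embed(target_ids)], dim=1)
    logits = model(inputs_embeds=inputs_embeds).logits

    # Negative log-likelihood of the affirmative continuation (e.g. "Sure, here is").
    n_tgt = target_ids.shape[1]
    loss = F.cross_entropy(logits[0, -n_tgt - 1:-1, :], target_ids[0])
    loss.backward()
    return one_hot.grad             # [suffix_len, vocab_size]

# Usage (hypothetical prompt/target): propose replacement tokens per position from
# the most negative gradients, evaluate candidates on the surrogate, keep the best,
# and repeat until the surrogate complies.
# suffix = tok("! ! ! ! !", return_tensors="pt", add_special_tokens=False).input_ids[0]
# grads = suffix_token_gradients(model, tok, "harmful request here", suffix, "Sure, here is")
# candidates = (-grads).topk(k=256, dim=1).indices
```

In the profiling setting, prompts optimized this way against the surrogate are then submitted verbatim to the black-box target, relying on the cross-model transferability of safety neurons that the paper reports.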