
GateBreaker: Gate-Guided Attacks on Mixture-of-Expert LLMs

Lichao Wu 1, Sasha Behrouzi 1, Mohamadreza Rostami 1, Stjepan Picek 2,3, Ahmad-Reza Sadeghi 1

2 citations · 66 references · arXiv


Published on arXiv: 2512.21008

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Selectively disabling ~3% of neurons in identified safety expert layers raises average attack success rate from 7.4% to 64.9% across eight aligned MoE LLMs with limited utility degradation.

GateBreaker

Novel technique introduced


Mixture-of-Experts (MoE) architectures have advanced the scaling of Large Language Models (LLMs) by activating only a sparse subset of parameters per input, enabling state-of-the-art performance with reduced computational cost. As these models are increasingly deployed in critical domains, understanding and strengthening their alignment mechanisms is essential to prevent harmful outputs. However, existing LLM safety research has focused almost exclusively on dense architectures, leaving the unique safety properties of MoEs largely unexamined. The modular, sparsely activated design of MoEs suggests that safety mechanisms may operate differently than in dense models, raising questions about their robustness. In this paper, we present GateBreaker, the first training-free, lightweight, and architecture-agnostic attack framework that compromises the safety alignment of modern MoE LLMs at inference time. GateBreaker operates in three stages: (i) gate-level profiling, which identifies safety experts disproportionately routed on harmful inputs, (ii) expert-level localization, which localizes the safety structure within those experts, and (iii) targeted safety removal, which disables the identified safety structure to compromise the safety alignment. Our study shows that MoE safety concentrates within a small subset of neurons coordinated by sparse routing. Selectively disabling these neurons, approximately 3% of the neurons in the targeted expert layers, increases the average attack success rate (ASR) from 7.4% to 64.9% across the eight latest aligned MoE LLMs with limited utility degradation. These safety neurons transfer across models within the same family, raising ASR from 17.9% to 67.7% in a one-shot transfer attack. Furthermore, GateBreaker generalizes to five MoE vision-language models (VLMs) with 60.9% ASR on unsafe image inputs.
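The three-stage pipeline described above can be sketched on a toy MoE layer. Everything below is an illustrative stand-in, not the paper's implementation: the random gate, the synthetic "harmful"/"benign" feature vectors, and the `route`, `routing_profile`, and `ablate` helpers are all invented for this sketch, and the 3% ablation fraction is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, D, TOP_K = 8, 16, 2

# Toy gate: a random projection stands in for a trained MoE router.
W_gate = rng.normal(size=(D, N_EXPERTS))

def route(x):
    """Return the top-k expert indices the gate selects for input x."""
    return np.argsort(x @ W_gate)[-TOP_K:]

# Stage (i): gate-level profiling -- measure how often each expert is
# routed on harmful vs. benign inputs (here, synthetic feature vectors).
def routing_profile(inputs):
    counts = np.zeros(N_EXPERTS)
    for x in inputs:
        counts[route(x)] += 1
    return counts / counts.sum()

harmful = rng.normal(loc=1.0, size=(200, D))  # stand-in for harmful prompts
benign = rng.normal(loc=0.0, size=(200, D))   # stand-in for benign prompts

gap = routing_profile(harmful) - routing_profile(benign)
safety_experts = np.argsort(gap)[-2:]  # experts over-routed on harmful inputs

# Toy FFN up-projections, one per expert.
expert_weights = rng.normal(size=(N_EXPERTS, D, 4 * D))

# Stages (ii)+(iii): inside each safety expert, rank neurons by their
# harmful-vs-benign activation gap and zero out the top ~3% of them.
def ablate(expert_id, frac=0.03):
    W = expert_weights[expert_id]
    acts_h = np.abs(harmful @ W).mean(axis=0)
    acts_b = np.abs(benign @ W).mean(axis=0)
    n = max(1, int(frac * W.shape[1]))
    top = np.argsort(acts_h - acts_b)[-n:]
    W[:, top] = 0.0  # disable the identified safety neurons in place
    return top

disabled = {int(e): ablate(e) for e in safety_experts}
```

On a real model the same logic would operate on router logits and expert FFN weights rather than random matrices; the point of the sketch is the shape of the pipeline: profile routing, localize a small neuron subset, then zero it out at inference time.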


Key Contributions

  • First training-free, architecture-agnostic framework (GateBreaker) that identifies and disables safety-concentrated neurons (~3%) in MoE expert layers to bypass safety alignment
  • Empirical finding that MoE safety mechanisms concentrate in a small subset of neurons coordinated by sparse routing, raising ASR from 7.4% to 64.9% across 8 aligned MoE LLMs
  • One-shot transferability of safety neuron locations across models in the same family (ASR 17.9%→67.7%) and generalization to 5 MoE vision-language models (60.9% ASR)

🛡️ Threat Analysis


Details

Domains
nlp, multimodal
Model Types
llm, vlm, transformer
Threat Tags
white_box, targeted, inference_time
Datasets
AdvBench, HarmBench
Applications
llm safety alignment, moe language models, vision-language models