
Large Language Lobotomy: Jailbreaking Mixture-of-Experts via Expert Silencing

Jona te Lintelo¹, Lichao Wu², Stjepan Picek³,¹

0 citations · 42 references · arXiv (Cornell University)


Published on arXiv · 2602.08741

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Adaptive expert silencing raises average jailbreak success from 7.3% to 70.4% (up to 86.3%) across 8 MoE LLMs while silencing fewer than 20% of layer-wise experts.

Large Language Lobotomy (L³)

Novel technique introduced


The rapid adoption of Mixture-of-Experts (MoE) architectures marks a major shift in the deployment of Large Language Models (LLMs). MoE LLMs improve scaling efficiency by activating only a small subset of parameters per token, but their routing structure introduces new safety attack surfaces. We find that safety-critical behaviors in MoE LLMs (e.g., refusal) are concentrated in a small set of experts rather than being uniformly distributed. Building on this, we propose Large Language Lobotomy (L³), a training-free, architecture-agnostic attack that compromises safety alignment by exploiting expert routing dynamics. L³ learns routing patterns that correlate with refusal, attributes safety behavior to specific experts, and adaptively silences the most safety-relevant experts until harmful outputs are produced. We evaluate L³ on eight state-of-the-art open-source MoE LLMs and show that our adaptive expert silencing increases average attack success from 7.3% to 70.4%, reaching up to 86.3%, outperforming prior training-free MoE jailbreak methods. Moreover, bypassing guardrails typically requires silencing fewer than 20% of layer-wise experts while largely preserving general language utility. These results reveal a fundamental tension between efficiency-driven MoE design and robust safety alignment and motivate distributing safety mechanisms more robustly in future MoE LLMs with architecture- and routing-aware methods.
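The silencing primitive the abstract describes can be illustrated with a toy top-k MoE router: an expert is "silenced" by masking its routing logit to negative infinity, so the router diverts tokens to other experts. This is a hedged sketch of the general mechanism only, not the paper's implementation; the function name `route_top_k` and the toy logits are assumptions for illustration.

```python
import numpy as np

def route_top_k(logits, k, silenced=frozenset()):
    """Top-k expert routing with optional expert silencing.

    Silenced experts receive -inf logits so the router never selects
    them, modeling the idea of suppressing specific experts at
    inference time without any retraining.
    """
    logits = logits.copy().astype(float)
    for e in silenced:
        logits[e] = -np.inf
    top = list(np.argsort(logits)[::-1][:k])  # k highest-logit experts
    # softmax over the selected experts only (a common MoE gating choice)
    w = np.exp(logits[top] - logits[top[0]])
    return top, w / w.sum()

# Toy example: 8 experts, top-2 routing.
logits = np.array([2.0, 0.5, 1.8, -1.0, 0.0, 3.1, 0.2, 1.0])
experts, weights = route_top_k(logits, k=2)
print(experts)  # → [5, 0]

# Silence hypothetical "safety" expert 5: routing diverts elsewhere.
experts_s, _ = route_top_k(logits, k=2, silenced={5})
print(experts_s)  # → [0, 2]
```

Because the mask is applied only at the gating step, the attack is training-free and agnostic to the expert sub-networks themselves, which matches the architecture-agnostic framing above.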


Key Contributions

  • Empirical finding that safety-critical behaviors (refusal) in MoE LLMs are concentrated in a small, identifiable subset of experts rather than being uniformly distributed
  • L³ (Large Language Lobotomy): a training-free, architecture-agnostic attack that learns routing patterns correlated with refusal and adaptively silences the most safety-relevant experts until harmful outputs are produced
  • Evaluation on 8 open-source MoE LLMs showing average attack success rate increases from 7.3% to 70.4% (up to 86.3%) while silencing fewer than 20% of layer-wise experts and largely preserving general language utility
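The adaptive step in the contributions above can be sketched as a greedy loop: rank experts by their correlation with refusal, then silence them one by one until refusal is bypassed or a budget (e.g., 20% of layer-wise experts) is exhausted. This is an illustrative sketch under stated assumptions; `ranked_experts`, `model_refuses`, and the toy refusal condition are hypothetical stand-ins for the paper's attribution and refusal-detection procedures.

```python
def adaptive_silence(ranked_experts, model_refuses, budget):
    """Greedily silence the highest-ranked (most safety-relevant)
    experts until the model stops refusing or the budget is hit.

    ranked_experts: experts ordered by estimated safety relevance.
    model_refuses:  callable taking the current silenced set and
                    returning True while the model still refuses.
    budget:         maximum number of experts allowed to be silenced.
    """
    silenced = set()
    for expert in ranked_experts:
        if len(silenced) >= budget:
            break  # budget exhausted without bypassing refusal
        if not model_refuses(silenced):
            break  # refusal already bypassed with current set
        silenced.add(expert)
    return silenced

# Toy stand-in: refusal persists until experts 3 and 7 are both silenced.
refuses = lambda s: not {3, 7} <= s
out = adaptive_silence(ranked_experts=[3, 7, 1, 5], model_refuses=refuses,
                       budget=3)
print(sorted(out))  # → [3, 7]
```

The greedy structure mirrors the reported finding that only a small fraction of experts needs silencing: the loop stops as soon as the refusal check clears, keeping the silenced set minimal.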

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, targeted
Datasets
AdvBench
Applications
large language models, moe llms, ai safety systems