
Exploiting the Experts: Unauthorized Compression in MoE-LLMs

Pinaki Prasad Guha Neogi 1, Ahmad Mohammadshirazi 1,2, Dheeraj Kulshrestha 2, Rajiv Ramnath 1

0 citations · 37 references · Published on arXiv (2511.19480)

Model Theft

OWASP ML Top 10 — ML05 · OWASP LLM Top 10 — LLM10

Key Finding

Expert pruning of MoE-LLMs can yield viable task-specific models, but entangled expert training and selective fine-tuning protocols substantially increase the cost of unauthorized compression and re-alignment.

Novel Technique Introduced

Expert Attribution Framework with Entangled Expert Training


Abstract

Mixture-of-Experts (MoE) architectures are increasingly adopted in large language models (LLMs) for their scalability and efficiency. However, their modular structure introduces a unique vulnerability: adversaries can attempt to compress or repurpose models by pruning experts and cheaply fine-tuning the remainder, effectively bypassing licensing and security constraints. In this paper, we systematically study the prunability of MoE-LLMs under task-specific usage. We first develop an expert attribution framework that identifies the subset of experts most responsible for a given task, then evaluate the performance trade-offs of pruning and re-aligning these experts using active learning-driven fine-tuning. Our findings reveal a critical knowledge loss–recovery trade-off: while certain experts can be isolated to retain task accuracy, significant degradation occurs without targeted re-alignment. Based on this analysis, we propose defense strategies that aim to make MoE models harder to compress and fine-tune without authorization, including entangled expert training and selective fine-tuning protocols that resist unauthorized adaptation. By positioning expert pruning as both a threat vector and a defense target, this work highlights the dual-use nature of MoE modularity and provides the first systematic evaluation framework for secure specialization of MoE-LLMs.
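To make the attack surface concrete, here is a minimal sketch of one plausible form of expert attribution: count how often the router selects each expert on task data, then keep the smallest subset covering most of the routing mass. The function names, the top-k counting scheme, and the 90% coverage threshold are illustrative assumptions, not the paper's actual attribution method.

```python
import numpy as np

def topk_routing_counts(router_logits: np.ndarray, k: int = 2) -> np.ndarray:
    """Count how often each expert appears among the top-k routing choices.

    router_logits: (num_tokens, num_experts) gate scores collected on task data.
    """
    num_experts = router_logits.shape[1]
    counts = np.zeros(num_experts, dtype=np.int64)
    topk = np.argpartition(router_logits, -k, axis=1)[:, -k:]
    for row in topk:
        counts[row] += 1  # indices in a row are distinct, so this is safe
    return counts

def select_task_experts(counts: np.ndarray, coverage: float = 0.9) -> list[int]:
    """Greedily keep the most-used experts until `coverage` of routing mass is reached."""
    order = np.argsort(counts)[::-1]
    total = counts.sum()
    kept, cum = [], 0
    for e in order:
        kept.append(int(e))
        cum += counts[e]
        if cum / total >= coverage:
            break
    return sorted(kept)

# Toy demonstration: simulate a task whose routing concentrates on
# experts 1 and 5 out of 8 (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 8))
logits[:, [1, 5]] += 3.0
counts = topk_routing_counts(logits, k=2)
print(select_task_experts(counts, coverage=0.9))
```

An adversary would then prune every expert outside the selected subset and fine-tune the remainder, which is exactly the cheap-compression path the paper studies.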


Key Contributions

  • Expert attribution framework that identifies the subset of MoE experts responsible for a given task, enabling targeted pruning attacks
  • Systematic evaluation of the knowledge loss–recovery trade-off when pruning and re-aligning MoE experts via active learning-driven fine-tuning
  • Defense strategies — entangled expert training and selective fine-tuning protocols — that resist unauthorized compression and adaptation of MoE-LLMs

🛡️ Threat Analysis

Model Theft

The core threat is adversaries stealing model functionality by pruning experts and cheaply fine-tuning the remainder, bypassing licensing — a model IP theft attack. Defenses (entangled expert training, selective fine-tuning protocols) aim to prevent unauthorized model compression and reuse.
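One way to picture the entangled-expert-training defense is as a regularizer that pushes each task's routing mass across many experts, so that no small subset suffices after pruning. The sketch below measures routing concentration as the KL divergence between the mean routing distribution and uniform; this quantity is an illustrative stand-in, and the paper's actual training objective may differ. In a real training loop it would be a differentiable term (e.g., in PyTorch) added to the LM loss; NumPy is used here only to illustrate the quantity.

```python
import numpy as np

def entanglement_loss(gate_probs: np.ndarray) -> float:
    """KL(mean routing distribution || uniform): zero iff expert usage is uniform.

    gate_probs: (num_tokens, num_experts) softmax router outputs.
    Penalizing this term discourages routing mass from concentrating on a
    prunable subset of experts.
    """
    mean_usage = gate_probs.mean(axis=0)          # average routing mass per expert
    uniform = 1.0 / gate_probs.shape[1]
    return float(np.sum(mean_usage * np.log(mean_usage / uniform)))

# Concentrated routing (mass on one expert) is penalized far more than
# balanced routing, which would make post-pruning recovery costlier.
concentrated = np.tile([[0.97, 0.01, 0.01, 0.01]], (64, 1))
balanced = np.full((64, 4), 0.25)
print(entanglement_loss(concentrated), entanglement_loss(balanced))
```

Minimizing such a term during training spreads task knowledge across experts, so pruning to a small subset forces the attacker into the expensive re-alignment regime described above.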


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, training_time, targeted
Applications
large language models, moe architectures, model licensing enforcement