
RepetitionCurse: Measuring and Understanding Router Imbalance in Mixture-of-Experts LLMs under DoS Stress

Ruixuan Huang 1, Qingyue Wang 1, Hantao Huang 2, Yudong Gao 1, Dong Chen 1, Shuai Wang 1, Wei Wang 1

0 citations · 28 references · arXiv


Published on arXiv: 2512.23995

Model Denial of Service

OWASP LLM Top 10: LLM04

Key Finding

RepetitionCurse increases end-to-end inference latency by 3.063x on Mixtral-8x7B using low-cost, model-agnostic repetitive token prompts.

RepetitionCurse

Novel technique introduced


Mixture-of-Experts architectures have become the standard for scaling large language models due to their superior parameter efficiency. To accommodate the growing number of experts in practice, modern inference systems commonly adopt expert parallelism to distribute experts across devices. However, the absence of explicit load balancing constraints during inference allows adversarial inputs to trigger severe routing concentration. We demonstrate that out-of-distribution prompts can manipulate the routing strategy such that all tokens are consistently routed to the same set of top-$k$ experts, which creates computational bottlenecks on certain devices while forcing others to idle. This converts an efficiency mechanism into a denial-of-service attack vector, leading to violations of service-level agreements for time to first token. We propose RepetitionCurse, a low-cost black-box strategy to exploit this vulnerability. By identifying a universal flaw in MoE router behavior, RepetitionCurse constructs adversarial prompts using simple repetitive token patterns in a model-agnostic manner. On widely deployed MoE models like Mixtral-8x7B, our method increases end-to-end inference latency by 3.063x, degrading service availability significantly.
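The abstract describes the attack as constructing prompts from "simple repetitive token patterns"; the exact construction is not given in this summary. A minimal sketch of what such a model-agnostic prompt builder could look like (the function name, choice of token, and repeat count are assumptions for illustration):

```python
# Illustrative sketch only: RepetitionCurse is described as building adversarial
# prompts from repetitive token patterns; this hypothetical helper just repeats
# one token, producing an out-of-distribution input intended to push the MoE
# router toward the same top-k experts for every token.
def build_repetition_prompt(token: str, repeats: int) -> str:
    """Return a prompt consisting of `token` repeated `repeats` times."""
    return " ".join([token] * repeats)

prompt = build_repetition_prompt("the", 512)
print(prompt[:23])  # first few repeats of the pattern
```

Because the pattern is a plain string, the same prompt can be sent to any deployed MoE model through its normal serving API, matching the black-box, model-agnostic setting the paper assumes.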


Key Contributions

  • Identifies a universal vulnerability in MoE routers: the absence of load-balancing constraints at inference time allows out-of-distribution inputs to concentrate routing onto a fixed expert subset.
  • Proposes RepetitionCurse, a model-agnostic black-box attack that constructs adversarial prompts from simple repetitive token patterns to trigger severe router imbalance.
  • Demonstrates a 3.063x end-to-end inference latency increase on Mixtral-8x7B, converting expert parallelism from an efficiency mechanism into a DoS vector.
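The bottleneck mechanism in the contributions above can be made concrete with a toy latency model (the device layout, token counts, and unit per-token cost below are illustrative assumptions, not numbers from the paper): under expert parallelism, a layer finishes only when the busiest device finishes, so concentrating all tokens on one device's experts inflates latency while the other devices idle.

```python
from collections import Counter

def step_latency(expert_assignments, experts_per_device, per_token_cost=1.0):
    """Toy model of one MoE layer under expert parallelism: each device
    processes only the tokens routed to its local experts, and the layer's
    latency is set by the most heavily loaded device."""
    device_load = Counter()
    for expert in expert_assignments:
        device_load[expert // experts_per_device] += 1
    return max(device_load.values()) * per_token_cost

# Balanced routing: 64 tokens spread evenly over 8 experts on 4 devices.
balanced = [t % 8 for t in range(64)]
# Concentrated routing: every token hits expert 0, so one device does all the work.
concentrated = [0] * 64

print(step_latency(balanced, 2))      # each device handles 16 tokens
print(step_latency(concentrated, 2))  # one device handles all 64: a 4x slowdown
```

In this toy setup the concentrated case is 4x slower per layer; the paper's measured end-to-end figure on Mixtral-8x7B is 3.063x, since real serving stacks include non-MoE work that the attack does not inflate.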

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time, untargeted
Datasets
Mixtral-8x7B evaluation
Applications
llm inference serving, mixture-of-experts llms