
Rethinking Latency Denial-of-Service: Attacking the LLM Serving Framework, Not the Model

Tianyi Wang, Huawei Fan, Yuanchao Shu, Peng Cheng, Cong Wang

0 citations · 68 references · arXiv (Cornell University)


Published on arXiv

2602.07878

Model Denial of Service

OWASP LLM Top 10 — LLM04

Key Finding

Achieves 20-280x average slowdown on Time to First Token and 1.5-4x on Time Per Output Token compared to existing latency attacks, with 30-40% lower attack cost, in black-box settings against vLLM/SGLang-style serving systems.

Fill and Squeeze (F&S)

Novel technique introduced


Large Language Models face an emerging and critical threat known as latency attacks. Because LLM inference is inherently expensive, even modest slowdowns can translate into substantial operating costs and severe availability risks. Recently, a growing body of research has focused on algorithmic complexity attacks that craft inputs to trigger worst-case output lengths. However, we report a counter-intuitive finding: these algorithmic latency attacks are largely ineffective against modern LLM serving systems. We reveal that system-level optimizations such as continuous batching provide a form of logical isolation that mitigates contagious latency impact on co-located users. In this paper, we therefore shift the focus from the algorithm to the system layer and introduce a new Fill and Squeeze attack strategy targeting the state transitions of the scheduler. "Fill" first exhausts the global KV cache to induce Head-of-Line blocking, while "Squeeze" forces the system into repetitive preemption. By manipulating output lengths with methods ranging from simple plain-text prompts to more complex prompt engineering, and by leveraging side-channel probing of memory status, we demonstrate that the attack can be orchestrated in a black-box setting at much lower cost. Extensive evaluations show up to a 20-280x average slowdown on Time to First Token and a 1.5-4x average slowdown on Time Per Output Token compared to existing attacks, with 30-40% lower attack cost.
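The scheduler behavior the attack exploits can be illustrated with a deliberately simplified toy model of continuous batching under a global KV-cache budget. This is a didactic sketch under assumed semantics, not vLLM's or SGLang's actual scheduler: when the cache is full, newly arriving requests queue at admission (Head-of-Line blocking, the "Fill" effect), and a running request that cannot allocate its next decode block is evicted (preemption, the "Squeeze" effect).

```python
from collections import deque

class ToyScheduler:
    """Toy continuous-batching scheduler with a global KV-cache budget.

    Didactic model only: it shows the two failure modes the paper's
    Fill and Squeeze strategy targets -- admission-time queuing when
    the cache is exhausted, and mid-generation preemption when a
    running request cannot grow its KV allocation.
    """

    def __init__(self, kv_blocks: int):
        self.free = kv_blocks      # free KV-cache blocks
        self.running = {}          # request id -> blocks currently held
        self.waiting = deque()     # ids blocked at admission (HoL blocking)
        self.preempted = []        # ids evicted mid-generation

    def submit(self, rid: str, prompt_blocks: int):
        """Admit a request if its prompt fits in the cache, else queue it."""
        if prompt_blocks <= self.free:
            self.free -= prompt_blocks
            self.running[rid] = prompt_blocks
        else:
            self.waiting.append(rid)   # cache full: head-of-line blocked

    def step(self):
        """One decode step: each running request needs one more KV block."""
        for rid in list(self.running):
            if self.free >= 1:
                self.free -= 1
                self.running[rid] += 1
            else:
                # Out of cache: evict the request and reclaim its blocks.
                self.free += self.running.pop(rid)
                self.preempted.append(rid)
```

A long "fill" request that claims most of the budget leaves later arrivals stuck in `waiting`, and once the cache is exhausted each further decode step forces an eviction; real schedulers then re-run the preempted request from scratch, which is the repeated wasted work the attack amplifies.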


Key Contributions

  • Demonstrates that existing model-centric latency attacks are ineffective against continuous batching in modern LLM serving systems, which provides logical isolation between attacker and victim requests
  • Introduces Fill and Squeeze (F&S), a system-layer attack that exhausts the global KV cache to trigger Head-of-Line blocking (Fill) and forces repetitive scheduler preemption (Squeeze)
  • Develops a black-box side-channel probing mechanism using Inter-Token Latency as a proxy for GPU memory usage, enabling cost-efficient attack scheduling via a lightweight LightGBM regressor
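The probing idea in the last bullet can be sketched in a few lines: a black-box client only sees token arrival times, so it derives Inter-Token Latency from those timestamps and compares it to an idle-system baseline as a memory-pressure signal. This is a minimal sketch of the raw probe signal only; the paper additionally fits a LightGBM regressor on such features, which is omitted here, and `baseline_itl` is an assumed calibration input.

```python
import statistics

def inter_token_latencies(arrival_times):
    """Inter-token latencies (seconds) from streamed-token timestamps."""
    return [b - a for a, b in zip(arrival_times, arrival_times[1:])]

def memory_pressure_signal(arrival_times, baseline_itl):
    """Ratio of observed mean ITL to an idle-system baseline ITL.

    ITL serves as a black-box proxy for GPU memory status: values
    well above 1.0 suggest the serving system is heavily batched or
    near KV-cache exhaustion. Hypothetical helper for illustration.
    """
    itls = inter_token_latencies(arrival_times)
    return statistics.mean(itls) / baseline_itl
```

For example, tokens arriving every 50 ms against a 25 ms idle baseline yield a signal of 2.0, indicating elevated load; an attacker (or, equally, a defender monitoring for such probing) would schedule actions based on this signal.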

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Applications
llm serving frameworks, llm inference apis