
Whisper Leak: a side-channel attack on Large Language Models

Geoff McDonald, Jonathan Bar Or



Published on arXiv: 2511.03675

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Achieves >98% AUPRC on most of 28 tested LLMs; 17 of 28 models enable 100% precision at 5–20% recall under a 10,000:1 noise-to-target ratio using only encrypted traffic metadata.

Whisper Leak

Novel technique introduced


Large Language Models (LLMs) are increasingly deployed in sensitive domains including healthcare, legal services, and confidential communications, where privacy is paramount. This paper introduces Whisper Leak, a side-channel attack that infers user prompt topics from encrypted LLM traffic by analyzing packet size and timing patterns in streaming responses. Despite TLS encryption protecting content, these metadata patterns leak sufficient information to enable topic classification. We demonstrate the attack across 28 popular LLMs from major providers, achieving near-perfect classification (often >98% AUPRC) and high precision even at extreme class imbalance (10,000:1 noise-to-target ratio). For many models, we achieve 100% precision in identifying sensitive topics like "money laundering" while recovering 5-20% of target conversations. This industry-wide vulnerability poses significant risks for users under network surveillance by ISPs, governments, or local adversaries. We evaluate three mitigation strategies - random padding, token batching, and packet injection - finding that while each reduces attack effectiveness, none provides complete protection. Through responsible disclosure, we have collaborated with providers to implement initial countermeasures. Our findings underscore the need for LLM providers to address metadata leakage as AI systems handle increasingly sensitive information.
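The attack's core observable can be sketched in a few lines: an on-path observer cannot read TLS-protected content, but it can record the size and arrival time of each streamed response record, and sequences of (size, inter-arrival delta) pairs form the features a topic classifier learns from. The helper below is a minimal illustration of that feature extraction, not the paper's implementation; the capture values are invented.

```python
def extract_features(packets):
    """packets: list of (timestamp_sec, tls_record_size_bytes) as seen on the wire."""
    sizes = [size for _, size in packets]
    times = [t for t, _ in packets]
    # Inter-arrival deltas carry token-generation timing; round for stable floats.
    deltas = [round(b - a, 3) for a, b in zip(times, times[1:])]
    return sizes, deltas

# Hypothetical capture of one streamed LLM response (one record per token batch).
capture = [(0.00, 102), (0.05, 118), (0.09, 97), (0.16, 134)]
sizes, deltas = extract_features(capture)
print(sizes)   # [102, 118, 97, 134]
print(deltas)  # [0.05, 0.04, 0.07]
```

In the paper's setting, such sequences from many labeled prompts train a classifier that flags target topics in encrypted traffic.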


Key Contributions

  • Whisper Leak: novel topic inference attack exploiting packet size and timing patterns in encrypted LLM streaming traffic to classify user prompt topics without decryption
  • Systematic evaluation across 28 commercial LLMs demonstrating industry-wide vulnerability, achieving >98% AUPRC and 100% precision at 5–20% recall under 10,000:1 noise-to-target imbalance
  • Assessment of three mitigation strategies (random padding, token batching, packet injection), showing each reduces but does not eliminate attack effectiveness
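Of the three mitigations, random padding is the simplest to picture: each streamed chunk is padded to a randomized length so record sizes no longer track token lengths. The sketch below shows one possible framing under assumed parameters (`max_pad`, the 2-byte length prefix); it is illustrative, not the scheme any provider deployed, and as the paper notes it reduces but does not eliminate leakage, since timing and aggregate-size signals remain.

```python
import random

def pad_chunk(chunk: bytes, max_pad: int = 32, rng=random) -> bytes:
    """Append 1..max_pad random-length padding; length-prefix the real payload."""
    pad_len = rng.randint(1, max_pad)
    return len(chunk).to_bytes(2, "big") + chunk + b"\x00" * pad_len

def unpad_chunk(padded: bytes) -> bytes:
    """Recover the original chunk using the 2-byte length prefix."""
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]

msg = b"partial token text"
padded = pad_chunk(msg)
assert unpad_chunk(padded) == msg      # round-trips losslessly
assert len(padded) > len(msg) + 2      # wire size no longer equals payload size
```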

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
Quora Question Pairs
Applications
llm chat services, healthcare ai, legal ai services, confidential communications