defense 2025

Shadow in the Cache: Unveiling and Mitigating Privacy Risks of KV-cache in LLM Inference

Zhifan Luo 1, Shuo Shao 1, Su Zhang 2, Lijing Zhou 2, Yuke Hu 1, Chenxu Zhao 1, Zhihao Liu 1, Zhan Qin 1,3

Published on arXiv: 2508.09442

Model Inversion Attack — OWASP ML Top 10 (ML03)

Sensitive Information Disclosure — OWASP LLM Top 10 (LLM06)

Key Finding

KV-Cloak thwarts all three KV-cache reconstruction attacks, reducing reconstruction quality to random noise with virtually no degradation in model accuracy or inference throughput.

KV-Cloak

Novel technique introduced


The Key-Value (KV) cache, which stores intermediate attention computations (Key and Value pairs) to avoid redundant calculations, is a fundamental mechanism for accelerating Large Language Model (LLM) inference. However, this efficiency optimization introduces significant yet underexplored privacy risks. This paper provides the first comprehensive analysis of these vulnerabilities, demonstrating that an attacker can reconstruct sensitive user inputs directly from the KV-cache. We design and implement three distinct attack vectors: a direct Inversion Attack, a more broadly applicable and potent Collision Attack, and a semantic-based Injection Attack. These methods demonstrate the practicality and severity of KV-cache privacy leakage issues. To mitigate this, we propose KV-Cloak, a novel, lightweight, and efficient defense mechanism. KV-Cloak uses a reversible matrix-based obfuscation scheme, combined with operator fusion, to secure the KV-cache. Our extensive experiments show that KV-Cloak effectively thwarts all proposed attacks, reducing reconstruction quality to random noise. Crucially, it achieves this robust security with virtually no degradation in model accuracy and minimal performance overhead, offering a practical solution for trustworthy LLM deployment.
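To make the cached-state risk the abstract describes concrete, here is a minimal single-head sketch of KV caching in NumPy. All names, shapes, and the toy query are illustrative assumptions, not taken from the paper: each decoding step computes Key/Value projections only for the new token and appends them to the cache, so the cache ends up holding a projection of every input token — exactly the intermediate state the attacks target.

```python
# Minimal sketch of KV caching in single-head attention (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
d = 8                                 # head dimension (hypothetical)
W_k = rng.normal(size=(d, d))         # key projection
W_v = rng.normal(size=(d, d))         # value projection

kv_cache = {"K": np.empty((0, d)), "V": np.empty((0, d))}

def step(x, cache):
    """Process one new token embedding x, reusing cached K/V."""
    k = x @ W_k                                  # only the new token's K/V
    v = x @ W_v                                  # are computed...
    cache["K"] = np.vstack([cache["K"], k])      # ...and appended to the cache
    cache["V"] = np.vstack([cache["V"], v])
    q = x                                        # toy query: the embedding itself
    scores = q @ cache["K"].T / np.sqrt(d)       # attend over ALL cached tokens
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    return attn @ cache["V"]

for t in range(4):                               # decode four tokens
    out = step(rng.normal(size=d), kv_cache)

assert kv_cache["K"].shape == (4, d)             # cache grows with the sequence
```

Because each cached row is a fixed linear projection of a token embedding, the cache retains recoverable information about every private input token — which is why reconstruction attacks are feasible.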


Key Contributions

  • First comprehensive study of KV-cache privacy risks: three novel attack vectors (Inversion Attack using model weights, Collision Attack via forward-pass matching, semantic Injection Attack) that reconstruct private user inputs from KV-cache
  • KV-Cloak: a lightweight reversible matrix-based obfuscation scheme with operator fusion that secures KV-cache against all three attack classes
  • Empirical demonstration that KV-Cloak reduces reconstruction quality to random noise with negligible model accuracy degradation and minimal latency overhead
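The contributions above hinge on KV-Cloak's reversible matrix-based obfuscation with operator fusion. The sketch below shows only the general idea, not the paper's actual construction: multiply cached rows by a secret invertible matrix before storage, and fold the matrix inverse into the query side so attention scores come out unchanged without ever materializing the plaintext keys. The matrix `A` and all shapes are hypothetical.

```python
# Sketch of reversible matrix obfuscation of cached keys (general idea only;
# KV-Cloak's real scheme and operator fusion are more involved).
import numpy as np

rng = np.random.default_rng(1)
d = 8

A = rng.normal(size=(d, d))           # secret invertible matrix (not cached)
A_inv = np.linalg.inv(A)

k = rng.normal(size=(3, d))           # plaintext key rows for 3 tokens
k_obf = k @ A                         # what actually lands in the cache

# Reversibility: the owner of A can recover the exact keys.
k_rec = k_obf @ A_inv
assert np.allclose(k, k_rec)

# "Operator fusion" idea: transform the query instead of de-obfuscating K,
# so attention scores are computed directly on the obfuscated cache.
q = rng.normal(size=d)
q_fused = q @ A_inv.T
assert np.allclose(q_fused @ k_obf.T, q @ k.T)   # scores unchanged
```

An attacker who reads the cache sees only `k_obf`; without the secret matrix, the rows carry no directly invertible relationship to the true keys, while legitimate inference proceeds at full accuracy.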

🛡️ Threat Analysis

Model Inversion Attack

The three attack vectors (Inversion, Collision, Injection) all aim to recover private user inputs from the KV-cache's intermediate representations — in effect, embedding/model inversion that reconstructs inference-time private data. KV-Cloak is a direct defense against this reconstruction threat.
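To make this threat concrete, here is a toy white-box reconstruction in the spirit of the forward-pass-matching Collision Attack: with the embedding table and key projection known, an attacker recomputes candidate keys for the whole vocabulary and nearest-neighbor-matches them against leaked cache rows. The vocabulary size, projections, and matching rule are hypothetical stand-ins for a real model.

```python
# Toy collision-style recovery of input tokens from leaked cached keys.
# Illustrative only; real attacks handle multi-layer, multi-head caches.
import numpy as np

rng = np.random.default_rng(2)
vocab, d = 100, 8
E = rng.normal(size=(vocab, d))       # token embedding table (known weights)
W_k = rng.normal(size=(d, d))         # key projection (known weights)

secret = [17, 4, 42]                  # private input token ids
cached_K = E[secret] @ W_k            # what leaks from the KV-cache

candidate_K = E @ W_k                 # attacker: keys for every vocab token
recovered = [int(np.argmin(np.linalg.norm(candidate_K - k, axis=1)))
             for k in cached_K]

assert recovered == secret            # exact token-by-token reconstruction
```

Under KV-Cloak, the leaked rows would be the obfuscated keys instead, so this nearest-neighbor match degrades to chance.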


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, black_box, inference_time
Applications
llm inference services, model-as-a-service (maas), tee-based confidential inference