OptiLeak: Efficient Prompt Reconstruction via Reinforcement Learning in Multi-tenant LLM Services

Longxiang Wang 1, Xiang Zheng 1, Xuhao Zhang 2, Yao Zhang 2, Ye Wu 2, Cong Wang 1

0 citations · 49 references · arXiv (Cornell University)

Published on arXiv · 2602.20595

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Achieves up to 12.48× reduction in average requests per token for prompt reconstruction compared to baselines across medical and financial domains with models ranging from 3B to 14B parameters.

OptiLeak

Novel technique introduced


Multi-tenant LLM serving frameworks widely adopt shared Key-Value caches to enhance efficiency. However, this creates side-channel vulnerabilities enabling prompt leakage attacks. Prior studies identified these attack surfaces yet focused on expanding attack vectors rather than optimizing attack performance, reporting impractically high attack costs that underestimate the true privacy risk. We propose OptiLeak, a reinforcement learning-enhanced framework that maximizes prompt reconstruction efficiency through two-stage fine-tuning. Our key insight is that domain-specific "hard tokens" — terms difficult to predict yet carrying sensitive information — can be automatically identified via likelihood ranking and used to construct preference pairs for Direct Preference Optimization, eliminating manual annotation. This enables effective preference alignment while avoiding the overfitting issues of extended supervised fine-tuning. Evaluated on three benchmarks spanning medical and financial domains, OptiLeak achieves up to a 12.48× reduction in average requests per token compared to baseline approaches, with consistent improvements across model scales from 3B to 14B parameters. Our findings demonstrate that cache-based prompt leakage poses a more severe threat than previously reported, underscoring the need for robust cache isolation in production deployments.


Key Contributions

  • Identifies domain-specific 'hard tokens' via likelihood ranking to automatically construct DPO preference pairs for prompt reconstruction without manual annotation
  • Two-stage fine-tuning framework (SFT + DPO) that reduces average requests per token by up to 12.48× versus prior baseline approaches
  • Demonstrates that KV cache side-channel prompt leakage poses a substantially more severe privacy threat than previously estimated, urging robust cache isolation in production deployments
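The hard-token selection and preference-pair construction summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the bottom-k selection rule, and the `<unk>` masking scheme are all assumptions; in practice the likelihoods would come from the attacker's reconstruction model rather than a hand-written list.

```python
def identify_hard_tokens(token_likelihoods, k=3):
    """Rank prompt tokens by model likelihood and return the k
    lowest-likelihood ones -- the 'hard tokens' that are difficult
    to predict yet tend to carry sensitive, domain-specific content.
    (Bottom-k selection is an illustrative assumption.)"""
    ranked = sorted(token_likelihoods, key=lambda pair: pair[1])
    return [token for token, _ in ranked[:k]]


def build_preference_pairs(prompt, hard_tokens, mask="<unk>"):
    """For each hard token, treat the ground-truth prompt as the
    'chosen' completion and a copy with that token masked out as
    'rejected', yielding DPO-style preference pairs with no manual
    annotation. (Masking as the rejection scheme is hypothetical.)"""
    pairs = []
    for token in hard_tokens:
        pairs.append({
            "chosen": prompt,
            "rejected": prompt.replace(token, mask),
        })
    return pairs


# Toy usage: rare clinical terms score low likelihood and are selected.
likelihoods = [
    ("patient", 0.42), ("prescribed", 0.31),
    ("warfarin", 0.03), ("daily", 0.55), ("5mg", 0.02),
]
hard = identify_hard_tokens(likelihoods, k=2)  # → ["5mg", "warfarin"]
pairs = build_preference_pairs("prescribed warfarin 5mg daily", hard)
```

The resulting `pairs` could then be fed to a standard DPO trainer as the second fine-tuning stage; the key point is that the preference signal is derived automatically from likelihood ranking, which is what lets the approach scale across domains without annotators.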

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time, targeted
Datasets
medical domain benchmarks, financial domain benchmarks
Applications
multi-tenant llm serving, llm api services