defense 2026

Improving User Privacy in Personalized Generation: Client-Side Retrieval-Augmented Modification of Server-Side Generated Speculations

Alireza Salemi, Hamed Zamani

0 citations · 72 references · arXiv


Published on arXiv · 2601.17569

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

P³ recovers 90.3%–95.7% of full-profile personalization utility while introducing only 1.5%–3.5% marginal privacy leakage under linkability and attribute inference attacks, relative to submitting non-personalized queries.
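To make the linkability metric concrete, the sketch below shows one simple way such an attack can be scored: an attacker observes outgoing queries and tries to match each one back to the user profile it came from, with lower matching accuracy indicating less leakage. This is an illustrative probe only, not the paper's evaluation protocol; the bag-of-words cosine similarity is a deliberately simple stand-in for whatever attack model the attacker would actually use.

```python
# Illustrative linkability-attack scoring (assumed setup, not the
# paper's exact protocol): the attacker links each query to the most
# similar user profile; accuracy measures how often the link is correct.
from collections import Counter
import math

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between two bag-of-words term vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def linkability_accuracy(queries: list[str], profiles: list[str]) -> float:
    """Fraction of queries the attacker links to the correct profile.

    Assumes queries[i] originated from the user with profiles[i].
    """
    hits = 0
    for i, q in enumerate(queries):
        guess = max(range(len(profiles)), key=lambda j: bow_cosine(q, profiles[j]))
        hits += guess == i
    return hits / len(queries)
```

In this framing, the paper's 1.5%–3.5% figure corresponds to how much such an attacker's success rises when P³ is used instead of sending the bare, non-personalized query.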

P³ (Client-Side Retrieval-Augmented Modification)

Novel technique introduced


Personalization is crucial for aligning Large Language Model (LLM) outputs with individual user preferences and background knowledge. State-of-the-art solutions are based on retrieval augmentation, where relevant context from a user profile is retrieved for LLM consumption. These methods face a trade-off between exposing retrieved private data to cloud providers and relying on less capable local models. We introduce P³, an interactive framework for high-quality personalization without revealing private profiles to server-side LLMs. In P³, a large server-side model generates a sequence of k draft tokens based solely on the user query, while a small client-side model, with retrieval access to the user's private profile, evaluates and modifies these drafts to better reflect user preferences. This process repeats until an end token is generated. Experiments on LaMP-QA, a recent benchmark consisting of three personalized question answering datasets, show that P³ consistently outperforms both non-personalized server-side and personalized client-side baselines, achieving statistically significant improvements of 7.4% to 9% on average. Importantly, P³ recovers 90.3% to 95.7% of the utility of a "leaky" upper-bound scenario in which the full profile is exposed to the large server-side model. Privacy analyses, including linkability and attribute inference attacks, indicate that P³ preserves the privacy of a non-personalized server-side model, introducing only marginal additional leakage (1.5%–3.5%) compared to submitting a query without any personal context. Additionally, the framework is efficient for edge deployment, with the client-side model generating only 9.2% of the total tokens. These results demonstrate that P³ provides a practical, effective solution for personalized generation with improved privacy.


Key Contributions

  • P³ framework: a speculative-decoding-style protocol where a server-side LLM generates draft tokens and a small client-side model with private RAG access evaluates and modifies them, keeping user profiles off the server.
  • Empirical privacy analysis using linkability and attribute inference attacks showing only 1.5%–3.5% marginal leakage versus non-personalized querying.
  • Recovers 90.3%–95.7% of the utility of a fully leaky (profile-exposed) upper bound, outperforming both non-personalized server-side and personalized client-side baselines by 7.4%–9%.
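The speculative-decoding-style loop in the first contribution can be sketched as the interaction below. Everything here is a hedged stand-in: the stub models, the `k=8` draft size, the length cap, and the prefix-match replacement rule are illustrative assumptions, not the authors' implementation. What the sketch does capture is the privacy boundary: the server function never receives the profile, and only the client-side verifier touches private data.

```python
# Minimal sketch of a P³-style draft-then-verify loop (assumed details).

def server_draft(query: str, prefix: list[str], k: int) -> list[str]:
    """Large server-side LLM: proposes k draft tokens from the query and
    the shared prefix only -- it never sees the user profile."""
    # Stub: emit placeholder tokens; a real system would call a hosted LLM.
    return [f"draft{len(prefix) + i}" for i in range(k)]

def client_verify(profile_docs: list[str], drafts: list[str]) -> list[str]:
    """Small client-side model with retrieval over the private profile:
    accepts each draft token or replaces it with a personalized one."""
    out = []
    for tok in drafts:
        # Stub rule: replace a draft if a retrieved profile snippet
        # extends it; otherwise accept the server's token as-is.
        replacement = next((d for d in profile_docs if d.startswith(tok)), None)
        out.append(replacement or tok)
    return out

def p3_generate(query: str, profile_docs: list[str],
                k: int = 8, max_len: int = 32) -> list[str]:
    """Alternate server drafting and client verification; a length cap
    stands in for the paper's end-token stopping condition."""
    prefix: list[str] = []
    while len(prefix) < max_len:
        drafts = server_draft(query, prefix, k)
        prefix.extend(client_verify(profile_docs, drafts))
    return prefix[:max_len]
```

Because the client only intervenes where the profile matters, most tokens still come from the server draft, which is consistent with the paper's report that the client-side model generates only 9.2% of the total tokens.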

Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, black_box
Datasets
LaMP-QA
Applications
personalized question answering, llm personalization