Model Inversion in Split Learning for Personalized LLMs: New Insights from Information Bottleneck Theory

Yunmeng Shu 1, Shaofeng Li 2, Tian Dong 1, Yan Meng 1, Haojin Zhu 1

Published on arXiv: 2501.05965

Model Inversion Attack (OWASP ML Top 10: ML03)

Sensitive Information Disclosure (OWASP LLM Top 10: LLM06)

Key Finding

RevertLM achieves attack scores of 38–75% across various split-learning scenarios, improving over state-of-the-art by more than 60%.

RevertLM

Novel technique introduced


Personalized Large Language Models (LLMs) have become increasingly prevalent, showcasing the impressive capabilities of models like GPT-4. This trend has also catalyzed extensive research on deploying LLMs on mobile devices. A feasible approach for such edge-cloud deployment is split learning. However, previous research has largely overlooked the privacy leakage associated with intermediate representations transmitted from devices to servers. This work is the first to identify model inversion attacks in the split-learning framework for LLMs, emphasizing the necessity of a secure defense. For the first time, we introduce mutual information entropy to understand the information propagation of Transformer-based LLMs and to assess privacy attack performance for individual LLM blocks. To address the issue that representations are sparser and contain less information than embeddings, we propose a two-stage attack system: the first stage projects representations into the embedding space, and the second stage uses a generative model to recover text from these embeddings. This decomposition breaks down the complexity and achieves attack scores of 38-75% across various scenarios, an improvement of more than 60% over the state of the art (SOTA). This work comprehensively highlights the potential privacy risks of deploying personalized LLMs on the edge.
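To make the threat surface concrete, the split-learning deployment described above can be sketched as follows. This is a minimal NumPy toy, not the paper's implementation: `transformer_block`, the hidden size, and the split point are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def transformer_block(x, w):
    """Toy stand-in for a transformer block: linear map + nonlinearity."""
    return np.tanh(x @ w)

# Hypothetical model: 4 blocks, hidden size 8, split point after block 2.
HIDDEN = 8
SPLIT = 2
weights = [rng.standard_normal((HIDDEN, HIDDEN)) for _ in range(4)]

def client_forward(embeddings):
    """Runs on the device: embedding lookup + the first SPLIT blocks."""
    h = embeddings
    for w in weights[:SPLIT]:
        h = transformer_block(h, w)
    return h  # this intermediate representation is what crosses the network

def server_forward(h):
    """Runs on the server: the remaining blocks."""
    for w in weights[SPLIT:]:
        h = transformer_block(h, w)
    return h

tokens = rng.standard_normal((5, HIDDEN))  # 5 token embeddings
sent = client_forward(tokens)              # leaves the device
out = server_forward(sent)                 # server completes inference
```

The privacy question the paper raises is exactly about `sent`: the server never sees `tokens`, but a curious server holding `sent` may still be able to reconstruct them.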


Key Contributions

  • First identification and demonstration of model inversion attacks against split learning for LLMs, recovering private input text from intermediate transformer representations.
  • Application of mutual information entropy (information bottleneck theory) to characterize information propagation across LLM blocks and predict attack difficulty at different split points.
  • RevertLM: a two-stage attack that first projects sparse intermediate representations into embedding space, then uses a generative model to decode text, achieving 38–75% attack scores and >60% improvement over SOTA.
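The two-stage structure of RevertLM can be illustrated with a toy sketch. Everything here is a simplified stand-in, not the paper's method: a least-squares fit plays the role of the learned projector (stage 1), and a nearest-neighbour vocabulary lookup plays the role of the generative decoder (stage 2). Vocabulary size, dimensions, and the "client" map are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 50-token vocabulary with 16-dim embeddings, and a
# fixed "client-side" map producing the intermediate representations.
vocab_emb = rng.standard_normal((50, 16))
W_client = 0.1 * rng.standard_normal((16, 16))

def client(e):
    """Stand-in for the on-device blocks whose output the server observes."""
    return np.tanh(e @ W_client)

# Stage 1: on auxiliary data, fit a projection from representation space
# back to embedding space (least squares stands in for the learned projector).
aux_ids = rng.integers(0, 50, size=500)
aux_rep = client(vocab_emb[aux_ids])
proj, *_ = np.linalg.lstsq(aux_rep, vocab_emb[aux_ids], rcond=None)

# Stage 2: decode the recovered embeddings to tokens (nearest-neighbour
# lookup stands in for the paper's generative text-recovery model).
def invert(rep):
    emb_hat = rep @ proj
    dists = ((emb_hat[:, None, :] - vocab_emb[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

secret_ids = rng.integers(0, 50, size=20)   # the private input
recovered = invert(client(vocab_emb[secret_ids]))
accuracy = (recovered == secret_ids).mean()
```

The design point the sketch preserves is the decomposition: rather than decoding text directly from sparse intermediate representations, the attack first moves them into the denser embedding space, where recovery is an easier problem.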

🛡️ Threat Analysis

Model Inversion Attack

The core contribution is a data reconstruction attack: the adversary (a curious server) recovers private input text from the intermediate hidden-state representations transmitted during split learning — the embedding-inversion scenario described in ML03.
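The paper's use of mutual information to gauge how much of the input survives at a given split point can be illustrated with a toy plug-in estimate. All names and sizes below are invented for illustration; real MI estimation for continuous high-dimensional representations requires more sophisticated estimators.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

def entropy(counts):
    """Plug-in Shannon entropy (bits) from a Counter of observations."""
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

# Toy model: 8 tokens, 2-dim embeddings, 3 "blocks" of tanh linear maps.
VOCAB = 8
emb = rng.standard_normal((VOCAB, 2))
blocks = [0.5 * rng.standard_normal((2, 2)) for _ in range(3)]

# Draw tokens, propagate through the blocks, and at each depth estimate
# I(X; Z) between the token id X and a coarsely quantized representation Z.
X = rng.integers(0, VOCAB, size=2000)
h = emb[X]
for depth, w in enumerate(blocks, start=1):
    h = np.tanh(h @ w)
    Z = [tuple(row) for row in (h > 0).astype(int)]  # sign quantization
    # Plug-in estimate: I(X; Z) = H(X) + H(Z) - H(X, Z)
    mi = (entropy(Counter(X.tolist())) + entropy(Counter(Z))
          - entropy(Counter(zip(X.tolist(), Z))))
    print(f"block {depth}: estimated I(X; Z) = {mi:.3f} bits")
```

The practical reading matches the paper's framing: the more mutual information a block's output retains about the input, the easier that split point is to invert, so the estimate doubles as a predictor of attack difficulty.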


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, inference_time, training_time
Datasets
conversational text dataset (inferred from table examples)
Applications
personalized llm deployment, edge-cloud split learning, mobile llm inference