
Towards Privacy-Preserving LLM Inference via Collaborative Obfuscation (Technical Report)

Yu Lin 1, Qizhi Zhang 1, Wenqiang Ruan 1, Daode Zhang 1, Jue Hong 1, Ye Wu 1, Hanning Xia 2, Yunlong Mao 2, Sheng Zhong 2



Published on arXiv (2603.01499)

  • Model Inversion Attack (OWASP ML Top 10 — ML03)
  • Sensitive Information Disclosure (OWASP LLM Top 10 — LLM06)

Key Finding

AloePri on a 671B DeepSeek model achieves 0–3.5% accuracy loss with plaintext-equivalent efficiency while limiting adversarial token recovery to under 5% against state-of-the-art inversion attacks.

AloePri

Novel technique introduced


The rapid development of large language models (LLMs) has driven the widespread adoption of cloud-based LLM inference services, while also bringing prominent privacy risks associated with transmitting and processing private data during remote inference. For privacy-preserving LLM inference technologies to be practically applied in industrial scenarios, three core requirements must be satisfied simultaneously: (1) accuracy and efficiency losses should be minimized to avoid degrading the service experience; (2) the inference process must run on large-scale clusters consisting of heterogeneous legacy xPUs; and (3) compatibility with existing LLM infrastructures should be ensured so that their engineering optimizations can be reused. To the best of our knowledge, no existing privacy-preserving LLM inference method satisfies all of these constraints while delivering meaningful privacy guarantees. In this paper, we propose AloePri, the first privacy-preserving LLM inference method for industrial applications. AloePri protects both input and output data via covariant obfuscation, which jointly transforms data and model parameters to achieve better accuracy and privacy. We carefully design the transformation for each model component to ensure inference accuracy and data privacy while retaining full compatibility with existing Language-Model-as-a-Service infrastructures. AloePri has been integrated into an industrial system for the evaluation of mainstream LLMs. Evaluation on the DeepSeek-V3.1-Terminus model (671B parameters) demonstrates that AloePri incurs an accuracy loss of 0.0%–3.5% and achieves efficiency equivalent to plaintext inference. Meanwhile, AloePri successfully resists state-of-the-art attacks, with fewer than 5% of tokens recovered. To the best of our knowledge, AloePri is the first method to demonstrate practical applicability to large-scale models in real-world systems.


Key Contributions

  • Covariant obfuscation technique that jointly transforms user data and model parameters so intermediate activations reveal no meaningful input information to a cloud server
  • Component-level transformation designs for Attention and FFN blocks that preserve inference accuracy while providing privacy guarantees
  • Industrial-scale validation on DeepSeek-V3.1-Terminus (671B parameters) showing 0–3.5% accuracy loss, plaintext-equivalent efficiency, and resistance to state-of-the-art inversion attacks
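The core invariance behind a joint data/parameter transformation can be illustrated on a single linear layer: if the client rotates its activations by a secret orthogonal matrix and the server's weight is pre-rotated by the inverse, the computed output is unchanged while the server only ever sees rotated data. This is a minimal numpy sketch of that idea only; AloePri's actual component-level transformations for Attention and FFN blocks are not described here, and the matrix `Q` and toy dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden dimension (illustrative, not the real model size)

x = rng.standard_normal((1, d))   # private client activation
W = rng.standard_normal((d, d))   # one linear-layer weight of the model

# Secret orthogonal matrix Q held by the client; the server is deployed
# with the pre-transformed weight Q.T @ W and only ever sees x @ Q.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
x_obf = x @ Q          # obfuscated activation sent to the server
W_obf = Q.T @ W        # obfuscated weight installed on the server

y_plain = x @ W        # reference plaintext result
y_obf = x_obf @ W_obf  # server-side result on obfuscated data/weights

# Orthogonality of Q makes the two results identical: x Q Q^T W = x W.
assert np.allclose(y_plain, y_obf)
```

In a full scheme the output of each component would itself stay in an obfuscated basis (chained transformations) rather than reverting to plaintext after one layer, which is where the per-component designs mentioned above come in.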

🛡️ Threat Analysis

Model Inversion Attack

AloePri defends against 'internal state inversion' attacks in which an adversarial cloud server reconstructs the user's private input tokens from intermediate LLM activations — a direct form of embedding/activation inversion. The paper evaluates resistance to these reconstruction attacks, showing <5% of tokens recovered.
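The simplest form of such an inversion is nearest-neighbor recovery: if a leaked activation is close to a token's embedding, an adversary holding the embedding matrix can recover the token by cosine similarity. The sketch below (toy vocabulary, Gaussian embeddings, and noise level are all illustrative assumptions, not the paper's attack) shows why unobfuscated intermediate states are dangerous.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab, d = 100, 16
E = rng.standard_normal((vocab, d))  # toy token embedding matrix

token_id = 42
# Leaked intermediate state: approximately the token's embedding plus noise.
h = E[token_id] + 0.01 * rng.standard_normal(d)

# Nearest-neighbor inversion: score every vocabulary row by cosine
# similarity to the leaked state and take the argmax.
sims = (E @ h) / (np.linalg.norm(E, axis=1) * np.linalg.norm(h))
recovered = int(np.argmax(sims))

assert recovered == token_id  # the private token is recovered exactly
```

Covariant obfuscation breaks this attack because the server-visible states live in a secretly rotated basis, so their nearest neighbors among the public embeddings no longer correspond to the true input tokens.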


Details

Domains: nlp
Model Types: llm, transformer
Threat Tags: white_box, inference_time
Datasets: DeepSeek-V3.1-Terminus (671B)
Applications: cloud LLM inference, Language Model as a Service (LMaaS)