Defense · 2026

Your Inference Request Will Become a Black Box: Confidential Inference for Cloud-based Large Language Models

Chung-ju Huang 1,2, Huiqiang Zhao 2, Yuanpeng He 1, Lijian Li 3, Wenpin Jiao 1, Zhi Jin 1, Peixuan Chen 2, Leye Wang 1


Published on arXiv (2603.00196)

  • Model Inversion Attack (OWASP ML Top 10 — ML03)
  • Sensitive Information Disclosure (OWASP LLM Top 10 — LLM06)

Key Finding

Reduces token reconstruction accuracy from over 97.5% to an average of 1.34% while guaranteeing output identical to the original unmodified LLM and preserving scalability.

Talaria / ReMO (Reversible Masked Outsourcing)

Novel technique introduced


The increasing reliance on cloud-hosted Large Language Models (LLMs) exposes sensitive client data, such as prompts and responses, to potential privacy breaches by service providers. Existing approaches fail to ensure privacy, maintain model performance, and preserve computational efficiency simultaneously. To address this challenge, we propose Talaria, a confidential inference framework that partitions the LLM pipeline to protect client data without compromising the cloud's model intellectual property or inference quality. Talaria executes sensitive, weight-independent operations within a client-controlled Confidential Virtual Machine (CVM) while offloading weight-dependent computations to the cloud GPUs. The interaction between these environments is secured by our Reversible Masked Outsourcing (ReMO) protocol, which uses a hybrid masking technique to reversibly obscure intermediate data before outsourcing computations. Extensive evaluations show that Talaria can defend against state-of-the-art token inference attacks, reducing token reconstruction accuracy from over 97.5% to an average of 1.34%, all while being a lossless mechanism that guarantees output identical to the original model without significantly decreasing efficiency and scalability. To the best of our knowledge, this is the first work that ensures clients' prompts and responses remain inaccessible to the cloud, while also preserving model privacy, performance, and efficiency.


Key Contributions

  • Talaria: a confidential inference framework that partitions LLM computation between a client-controlled Confidential Virtual Machine (for weight-independent ops) and cloud GPUs (for weight-dependent ops), protecting prompts without exposing model weights
  • Reversible Masked Outsourcing (ReMO) protocol: a hybrid masking technique that reversibly obscures intermediate representations before outsourcing to cloud GPUs, enabling lossless output invariance
  • Demonstrated defense against state-of-the-art token inference attacks, reducing reconstruction accuracy from >97.5% to an average of 1.34% with modest efficiency overhead
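The core idea behind ReMO — masking intermediate data so that a cloud can run weight-dependent linear operations without seeing the plaintext activations, while the client can exactly recover the true result — can be illustrated with a toy multiplicative mask. This is a minimal sketch under my own assumptions, not the paper's actual hybrid masking scheme; the variable names and the single-scalar mask are illustrative only.

```python
import numpy as np

# Toy sketch of reversible masked outsourcing for one linear op.
# NOTE: illustrative stand-in, NOT the paper's ReMO protocol.

rng = np.random.default_rng(0)

# Cloud side: secret weight of a weight-dependent op (e.g. a
# projection matrix). The client never learns W.
W = rng.standard_normal((4, 4))

def cloud_linear(x_masked):
    """Cloud computes the matmul on masked input only."""
    return x_masked @ W.T

# Client-side CVM: sensitive activation derived from the prompt.
x = rng.standard_normal((1, 4))

# A random nonzero scale commutes with any linear map,
# so the cloud's result can be exactly unmasked afterwards:
#   (c * x) @ W.T == c * (x @ W.T)
c = rng.uniform(1.0, 10.0)
y_masked = cloud_linear(c * x)   # cloud observes only c * x
y = y_masked / c                 # client unmasks

# Lossless: identical to the unmasked computation.
assert np.allclose(y, x @ W.T)
```

A scalar scale alone is of course a weak mask; the point is only the algebraic property ReMO relies on — the masking transform must commute with (or be invertible after) the outsourced weight-dependent computation, so that output invariance is guaranteed.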

🛡️ Threat Analysis

Model Inversion Attack

The primary threat defended against is token inference attacks — an adversarial cloud provider reconstructing client prompt tokens from intermediate activations/embeddings observed during inference. This is embedding inversion (a form of model inversion), and the paper evaluates Talaria directly against such reconstruction attacks, reducing accuracy from over 97.5% to an average of 1.34%.
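To make the threat concrete: if the cloud observes raw (or weakly protected) token embeddings and also hosts the model, it can recover tokens by nearest-neighbor lookup against the embedding table. The sketch below is a toy illustration with a hypothetical four-word vocabulary, not the paper's attack implementation.

```python
import numpy as np

# Toy embedding-inversion (token inference) attack sketch.
# Assumption: the cloud knows the embedding table E (it hosts the
# model) and observes per-token activations close to the embeddings.

rng = np.random.default_rng(1)
vocab = ["the", "patient", "has", "diabetes"]   # hypothetical vocabulary
E = rng.standard_normal((len(vocab), 8))        # embedding table

prompt_ids = [1, 2, 3]                          # "patient has diabetes"
# Observed activations: embeddings plus small noise.
observed = E[prompt_ids] + 0.01 * rng.standard_normal((3, 8))

# Attacker: pick the nearest embedding row for each observation.
recovered = [int(np.argmin(np.linalg.norm(E - a, axis=1)))
             for a in observed]
print([vocab[i] for i in recovered])
```

This is why Talaria keeps weight-independent, token-revealing computations inside the client's CVM and only ships masked intermediates to the GPU: the nearest-neighbor lookup fails once the observed vectors are reversibly obscured.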


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Applications
cloud llm inference, privacy-preserving inference, enterprise llm deployment