Efficient and Verifiable Privacy-Preserving Convolutional Computation for CNN Inference with Untrusted Clouds
Jinyu Lu 1, Xinrong Sun 1, Yunting Tao 2, Tong Ji 1, Fanyu Kong 1,3, Guoqiang Yang 3
Published on arXiv (arXiv:2508.12832)
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Achieves a 26–87× speedup over the original plaintext CNN models while maintaining accuracy and providing verifiable output correctness against an untrusted cloud
The widespread adoption of convolutional neural networks (CNNs) in resource-constrained scenarios has driven the development of Machine Learning as a Service (MLaaS) systems. However, this approach is susceptible to privacy leakage, as the data sent from the client to the untrusted cloud server often contains sensitive information. Existing CNN privacy-preserving schemes, while effective in ensuring data confidentiality through homomorphic encryption and secret sharing, face efficiency bottlenecks, particularly in convolution operations. In this paper, we propose a novel verifiable privacy-preserving scheme tailored for CNN convolutional layers. Our scheme enables efficient encryption and decryption, allowing resource-constrained clients to securely offload computations to the untrusted cloud server. Additionally, we present a verification mechanism that detects incorrect results with probability at least $1-\frac{1}{\left|Z\right|}$. Extensive experiments conducted on 10 datasets and various CNN models demonstrate that our scheme achieves speedups ranging from $26\times$ to $87\times$ compared to the original plaintext model while maintaining accuracy.
Key Contributions
- Novel encryption/decryption scheme tailored for CNN convolutional layers enabling efficient privacy-preserving offloading to untrusted cloud servers
- Verification mechanism detecting incorrect or tampered computation results from the cloud with success probability at least 1−1/|Z|
- Evaluation across 10 datasets and multiple CNN architectures demonstrating 26–87× speedup over plaintext while preserving model accuracy
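The paper's concrete encryption and decryption algorithms are not reproduced in this summary. As a minimal sketch of why convolutional layers admit lightweight client-side hiding, note that convolution is linear: a client can mask its input with a random tensor `r` whose convolution is precomputed offline (a common additive-masking trick, not necessarily the paper's construction), offload the heavy online convolution on the masked input, and unmask the result by subtraction. All names below are illustrative.

```python
import numpy as np

def conv2d(x, k):
    # Naive "valid" 2-D cross-correlation, for illustration only.
    h = x.shape[0] - k.shape[0] + 1
    w = x.shape[1] - k.shape[1] + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))   # client's private input
k = rng.standard_normal((3, 3))   # convolution kernel

# Offline phase: client draws a random mask r and precomputes conv(r, k).
r = rng.standard_normal((8, 8))
conv_r = conv2d(r, k)

# Online phase: the cloud only ever sees the masked input x + r.
cloud_result = conv2d(x + r, k)

# Client recovers conv(x, k) by linearity: conv(x + r, k) - conv(r, k).
recovered = cloud_result - conv_r
```

The online client cost is a cheap elementwise addition and subtraction; the expensive convolution on fresh data runs only at the cloud, which matches the offloading goal described above.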
🛡️ Threat Analysis
The paper explicitly proposes a verification mechanism that detects when an untrusted cloud returns incorrect or tampered inference results. This directly implements a verifiable inference scheme that proves output correctness, which is the defining use case for ML09's 'verifiable inference schemes' sub-category.
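The detection bound $1-\frac{1}{|Z|}$ has the shape of a Freivalds-style probabilistic check. The paper's exact verification procedure is not reproduced here, but as a hedged sketch: assuming the convolutional layer is expressed as a matrix product $Y = WX$ (e.g. via im2col), the client can verify a cloud-returned $Y$ by multiplying both sides with a random challenge vector drawn from a set $Z$, at quadratic rather than cubic cost. All function and variable names below are illustrative.

```python
import numpy as np

def verify_matmul(W, X, Y, Z_size=1000, rounds=3, rng=None):
    """Probabilistically check the cloud's claim that Y == W @ X.

    Each round multiplies both sides by a random challenge vector
    drawn from Z = {0, ..., Z_size - 1}, costing O(n^2) instead of
    an O(n^3) recomputation. If Y is wrong, a single round accepts
    with probability at most 1/|Z|, i.e. detection probability at
    least 1 - 1/|Z| per round.
    """
    rng = rng or np.random.default_rng()
    for _ in range(rounds):
        z = rng.integers(0, Z_size, size=Y.shape[1])
        if not np.array_equal(W @ (X @ z), Y @ z):
            return False  # caught an incorrect or tampered result
    return True

rng = np.random.default_rng(7)
W = rng.integers(-5, 5, size=(16, 16))  # stand-in for flattened conv weights
X = rng.integers(-5, 5, size=(16, 16))  # stand-in for im2col'd input patches
Y = W @ X                               # honest cloud result

Y_bad = Y.copy()
Y_bad[0, 0] += 1                        # cloud tampers with a single entry
```

Because the client only computes matrix-vector products, verification stays cheap even on a resource-constrained device, and repeating the check drives the acceptance probability of a wrong result toward zero geometrically.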