
Verifiable Split Learning via zk-SNARKs

Rana Alaa , Darío González-Ferreiro , Carlos Beis-Penedo , Manuel Fernández-Veiga , Rebeca P. Díaz-Redondo , Ana Fernández-Vilas

0 citations · 19 references · arXiv


Published on arXiv · 2511.01356

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

zk-SNARK integration achieves cryptographic verifiability of split learning computations for both parties, while blockchain-based alternatives provide immutable record-keeping but cannot verify that computations were performed correctly


Novel technique introduced


Split learning is an approach to collaborative learning in which a deep neural network is divided at a cut layer into a client-side part and a server-side part. The client executes its portion of the model on its raw input data and sends the intermediate activations to the server, which completes the computation. This architecture is useful for enabling collaborative training when data or compute resources are distributed across devices. However, split learning lacks a mechanism to verify the correctness and honesty of the computations performed and exchanged between the parties. To this end, this paper proposes a verifiable split learning framework that integrates zk-SNARK proofs to ensure correctness and verifiability. Proofs are generated and verified for the client-side forward propagation and for the server-side forward and backward propagation, guaranteeing verifiability on both sides. The verifiable split learning architecture is compared against a blockchain-enabled system for the same deep learning network, one that records updates but does not generate zero-knowledge proofs. The comparison shows that the zk-SNARK approach achieves verifiability and correctness, whereas the blockchain alternative is lightweight but unverifiable.
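To make the split-learning setting concrete, here is a minimal sketch of the client/server division at a cut layer. It is illustrative only: the layer sizes, weights, and function names are assumptions, not taken from the paper, and no proofs are generated yet — only the intermediate activations cross the trust boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network split at the "cut layer": the client runs the first
# layer on its raw data, the server runs the remaining layers.
# Shapes are illustrative, not from the paper.
W_client = rng.normal(size=(4, 8))    # client-side layer
W_server1 = rng.normal(size=(8, 8))   # server-side hidden layer
W_server2 = rng.normal(size=(8, 2))   # server-side output layer

def relu(x):
    return np.maximum(x, 0.0)

def client_forward(x):
    """Client computes activations at the cut layer from raw data."""
    return relu(x @ W_client)

def server_forward(a_cut):
    """Server completes the forward pass from the received activations."""
    h = relu(a_cut @ W_server1)
    return h @ W_server2  # logits

x = rng.normal(size=(1, 4))      # raw input never leaves the client
a_cut = client_forward(x)        # only this crosses the network
logits = server_forward(a_cut)
print(logits.shape)              # (1, 2)
```

Without the verification layer the paper adds, the server has no way to know that `a_cut` was actually produced by the agreed client-side model, and the client cannot check the server's forward or backward computations.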


Key Contributions

  • Two-sided zk-SNARK scheme for split learning that verifies both client-side activations and server-side forward/backward computations without revealing private data
  • Arithmetic circuit design to encode and verify split learning computation integrity throughout the training cycle
  • Comparative analysis against blockchain-enabled split learning, demonstrating that zk-SNARKs provide cryptographic verifiability while blockchain provides only lightweight immutability

🛡️ Threat Analysis

Output Integrity Attack

The paper proposes verifiable computation schemes using zk-SNARKs to prove that forward propagation activations and server-side backward propagation gradients were computed correctly — directly defending against tampering with or falsifying ML computation outputs exchanged between parties. This maps to 'Verifiable inference schemes (proving outputs weren't tampered with)' extended to training-time computations.
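The integrity property being proven can be sketched with a naive commit-and-recompute check. This is a stand-in, not the paper's scheme: a real zk-SNARK lets the verifier check a succinct proof without re-executing the computation or seeing the private inputs, whereas the toy verifier below sees everything and redoes all the work. The function names and the toy "cut layer" computation are illustrative assumptions.

```python
import hashlib
import json

def commit(values):
    """Hash commitment over a computation transcript (a stand-in for a
    zk-SNARK proof; a real SNARK is succinct and zero-knowledge)."""
    payload = json.dumps(values, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def forward(x, w):
    # Toy "cut layer": elementwise weighted product followed by ReLU.
    return [max(xi * wi, 0.0) for xi, wi in zip(x, w)]

# Prover (client) runs the computation and commits to inputs and outputs.
x, w = [1.0, -2.0, 3.0], [0.5, 0.5, -1.0]
a = forward(x, w)
proof = commit({"x": x, "w": w, "a": a})

# Verifier re-executes and checks the commitment. This captures the
# integrity claim (the output really came from the agreed computation)
# but none of the succinctness or privacy a SNARK provides: here the
# verifier sees x and w, which the paper's scheme keeps private.
assert commit({"x": x, "w": w, "a": forward(x, w)}) == proof
print("computation verified")
```

In the paper's construction, the same statement ("these activations/gradients are the correct output of the agreed circuit on hidden inputs") is encoded as an arithmetic circuit, so verification is cheap and reveals nothing about the raw data.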


Details

Domains
federated-learning
Model Types
cnn
Threat Tags
training_time · grey_box
Applications
split learning · collaborative training · distributed ml · healthcare ml · iot ml