
Guarding the Middle: Protecting Intermediate Representations in Federated Split Learning

Obaidullah Zaland 1, Sajib Mistry 2, Monowar Bhuyan 1

0 citations · 33 references · BigData Congress


Published on arXiv: 2602.17614

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

KD-UFSL degrades reconstruction of private client images, increasing MSE by up to 50% and reducing SSIM by up to 40%, while maintaining global model utility across four benchmark datasets.

KD-UFSL

Novel technique introduced


Big data scenarios, where massive, heterogeneous datasets are distributed across clients, demand scalable, privacy-preserving learning methods. Federated learning (FL) enables decentralized training of machine learning (ML) models across clients without data centralization. Decentralized training, however, introduces a computational burden on client devices. U-shaped federated split learning (UFSL) offloads a fraction of the client computation to the server while keeping both data and labels on the client side. However, the intermediate representations (i.e., smashed data) shared by clients with the server are prone to exposing clients' private data. To reduce this exposure, this work proposes k-anonymous differentially private UFSL (KD-UFSL), which leverages privacy-enhancing techniques such as microaggregation and differential privacy to minimize data leakage from the smashed data transferred to the server. We first demonstrate that an adversary can recover private client data from intermediate representations via a data-reconstruction attack, and then present a privacy-enhancing solution, KD-UFSL, to mitigate this risk. Our experiments indicate that, alongside increasing the mean squared error (MSE) between the actual and reconstructed images by up to 50% in some cases, KD-UFSL also decreases the structural similarity (SSIM) between them by up to 40% on four benchmark datasets. More importantly, KD-UFSL improves privacy while preserving the utility of the global model. This highlights its suitability for large-scale big data applications where privacy and utility must be balanced.
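The abstract scores reconstructions with MSE and SSIM. As a reference for how those two metrics compare an original image against its reconstruction, here is a minimal NumPy sketch; the single-window (global) SSIM below is a simplification of the sliding-window SSIM typically used in practice, and the constants follow the common defaults:

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images (higher = worse reconstruction)."""
    return float(np.mean((x - y) ** 2))

def ssim_global(x, y, data_range=1.0):
    """Single-window (global) SSIM -- a simplification of the usual
    sliding-window SSIM; 1.0 means structurally identical images."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
noisy = np.clip(img + rng.normal(0, 0.2, img.shape), 0, 1)
print(mse(img, img), ssim_global(img, img))  # MSE 0.0, SSIM ~ 1.0 for identical images
```

A defense "succeeds" in the paper's terms when it pushes MSE up and SSIM down between private images and the server's reconstructions.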


Key Contributions

  • Demonstrates that a curious server can reconstruct private client data from smashed data (intermediate representations) in U-shaped federated split learning
  • Proposes KD-UFSL combining feature-level k-anonymity via microaggregation and data-level differential privacy to protect intermediate representations
  • Empirically shows KD-UFSL increases reconstruction MSE by up to 50% and reduces SSIM by up to 40% across four datasets while preserving global model utility
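The two protection layers named above can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's algorithm: the norm-based grouping heuristic, clip norm, and Gaussian-mechanism noise calibration are all assumptions made here for demonstration.

```python
import numpy as np

def microaggregate(smashed, k):
    """Feature-level k-anonymity: order representations by L2 norm, group
    them into clusters of size k, and replace each member with its cluster
    centroid, so every record is indistinguishable from k-1 others.
    (Simplified: assumes the batch size is a multiple of k.)"""
    order = np.argsort(np.linalg.norm(smashed, axis=1))
    out = smashed.copy()
    for start in range(0, smashed.shape[0], k):
        idx = order[start:start + k]
        out[idx] = smashed[idx].mean(axis=0)
    return out

def dp_noise(smashed, clip_norm, epsilon, delta, rng):
    """Data-level differential privacy: clip each representation's L2 norm,
    then add Gaussian noise calibrated via the standard Gaussian mechanism."""
    norms = np.linalg.norm(smashed, axis=1, keepdims=True)
    clipped = smashed * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    sigma = clip_norm * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=smashed.shape)

rng = np.random.default_rng(0)
smashed = rng.normal(size=(8, 16))   # toy batch of intermediate activations
protected = dp_noise(microaggregate(smashed, k=4), clip_norm=1.0,
                     epsilon=2.0, delta=1e-5, rng=rng)
print(protected.shape)               # (8, 16): same shape, ready to send to the server
```

The client would apply this to the smashed data before transmission; the server trains on the protected representations, which is why utility can be preserved while reconstruction quality drops.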

🛡️ Threat Analysis

Model Inversion Attack

The core threat model is a curious server adversary reconstructing private client training data from the intermediate feature representations (smashed data) shared during U-shaped federated split learning. The paper first demonstrates the reconstruction attack (model inversion via feature leakage), then proposes KD-UFSL as a defense using microaggregation and DP, degrading reconstruction quality by raising MSE by up to 50% and lowering SSIM by up to 40%.
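The attack shape described above, a server fitting a decoder from smashed data back to inputs, can be illustrated with a linear toy model. Everything here is an assumption for demonstration (real attacks train a neural decoder on auxiliary data; the linear cut layer and least-squares inversion are stand-ins), but it shows why noised smashed data reconstructs worse:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: flattened "images" pass through a linear client-side cut
# layer, producing the smashed data the server observes.
images = rng.normal(size=(200, 32))
W = rng.normal(size=(32, 16))        # client front-end up to the cut layer
smashed = images @ W

# The curious server fits a decoder on auxiliary (smashed, image) pairs...
decoder, *_ = np.linalg.lstsq(smashed[:100], images[:100], rcond=None)

# ...then inverts held-out smashed data back toward the private inputs.
recon = smashed[100:] @ decoder
mse_plain = np.mean((recon - images[100:]) ** 2)

# Defense-style noise on the smashed data degrades the same reconstruction.
noisy = smashed + rng.normal(0.0, 3.0, size=smashed.shape)
recon_noisy = noisy[100:] @ decoder
mse_noisy = np.mean((recon_noisy - images[100:]) ** 2)

print(f"MSE plain={mse_plain:.3f}, noisy={mse_noisy:.3f}")
```

Note this is a white-box, training-time threat (matching the tags below): the server sees every smashed batch during training and needs no access to raw client data or labels.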


Details

Domains
federated-learning, vision
Model Types
federated, cnn
Threat Tags
training_time, white_box
Applications
federated split learning, privacy-preserving distributed training