
Decentralized Privacy-Preserving Federated Learning of Computer Vision Models on Edge Devices

Damian Harenčák 1, Lukáš Gajdošech 1,2, Martin Madaras 1,2

0 citations · 17 references · arXiv


Published on arXiv · 2601.04912

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

Gradient compression and noising reduce CNN classification accuracy, while data reconstruction from the gradients of segmentation networks is shown to be substantially more difficult than from those of classification networks.


Collaborative training of a machine learning model carries the risk of exposing sensitive or private data. Federated learning offers a way to train a single global model collectively without sharing client data: each client shares only the updated parameters of its local model, and a central server aggregates the parameters from all clients and redistributes the aggregated model back to them. Recent findings have shown that even in this scenario, private data can be reconstructed using only information about the model parameters. Current mitigation efforts focus mainly on reducing privacy risks on the server side, assuming that other clients will not act maliciously. In this work, we analyzed various methods for improving the privacy of client data in neural networks with respect to both the server and other clients. These methods include homomorphic encryption, gradient compression, and gradient noising, together with a discussion of modified federated learning systems such as split learning, swarm learning, and fully encrypted models. We analyzed the negative effects of gradient compression and gradient noising on the accuracy of convolutional neural networks used for classification, and showed that data reconstruction is considerably more difficult for segmentation networks. We also implemented a proof of concept on the NVIDIA Jetson TX2 module used in edge devices and simulated a federated learning process.
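The server-side aggregation the abstract describes can be sketched as a weighted average of client parameter vectors. This is a minimal FedAvg-style illustration, not the paper's implementation; the function name `fedavg` and the NumPy formulation are our own assumptions.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Aggregate client parameter vectors into one global model.

    Each client's contribution is weighted by its local dataset size,
    as in federated averaging. `client_params` is a list of equally
    shaped 1-D arrays; `client_sizes` gives the per-client sample counts.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()                  # normalize to a convex combination
    stacked = np.stack(client_params)         # shape: (num_clients, num_params)
    return weights @ stacked                  # weighted average per parameter
```

For example, two clients holding parameters `[1, 1]` and `[3, 3]` with 1 and 3 local samples respectively aggregate to `[2.5, 2.5]`, since the larger client receives three times the weight.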


Key Contributions

  • Comparative analysis of privacy-preserving FL techniques (homomorphic encryption, gradient compression, gradient noising, split learning, swarm learning) against gradient reconstruction attacks
  • Experimental evaluation of the accuracy degradation caused by gradient compression and noising on CNNs for classification and segmentation tasks
  • Proof-of-concept implementation of privacy-preserving FL on NVIDIA Jetson TX2 edge hardware

🛡️ Threat Analysis

Model Inversion Attack

The paper explicitly addresses gradient leakage attacks (DLG algorithm) in which adversaries reconstruct private client training data from shared model gradients — and evaluates multiple defenses (gradient compression, gradient noising, homomorphic encryption) against this data reconstruction threat in federated learning.
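To see why shared gradients leak training data at all, consider the smallest possible case: a single linear neuron with a squared loss. The gradients with respect to the weights and the bias differ only by a factor of the private input, so the input is recovered exactly by a division; full DLG generalizes this idea to deep networks via iterative gradient matching. The toy example below is our own illustration, not the DLG algorithm itself.

```python
import numpy as np

# Single linear neuron: pred = w @ x + b, loss = 0.5 * (pred - y)^2
w = np.array([0.2, -0.5, 0.1])
b = 0.3
x = np.array([1.0, 2.0, -1.0])   # private client input
y = 1.0                          # private client label

pred = w @ x + b
err = pred - y
grad_w = err * x                 # dL/dw, shared with the server
grad_b = err                     # dL/db, shared with the server

# Any holder of the gradients recovers x exactly (whenever grad_b != 0):
x_rec = grad_w / grad_b
```

Here `x_rec` equals `x` to machine precision, which is the core observation behind gradient leakage: the "harmless" parameter updates are a function of the private batch.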


Details

Domains
vision · federated-learning
Model Types
cnn · federated
Threat Tags
white_box · training_time
Applications
federated learning · edge computing · image classification · image segmentation