defense 2025

A New Perspective on Privacy Protection in Federated Learning with Granular-Ball Computing

Guannan Lai 1, Yihui Feng 1, Xin Yang 1, Xiaoyu Deng 1, Hao Yu 1, Shuyin Xia 2, Guoyin Wang 2, Tianrui Li 3



Published on arXiv (2501.04940)

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

GrBFL simultaneously reduces gradient reconstruction risk, improves communication efficiency, and maintains competitive classification accuracy compared to state-of-the-art FL baselines.

GrBFL (Granular-Ball Federated Learning)

Novel technique introduced


Federated Learning (FL) facilitates collaborative model training while prioritizing privacy by avoiding direct data sharing. However, most existing work addresses privacy challenges at the level of model parameters and outputs, neglecting the input level. To address this gap, we propose a novel framework called Granular-Ball Federated Learning (GrBFL) for image classification. GrBFL diverges from traditional methods that rely on the finest-grained input data. Instead, it segments images into multiple regions with optimal coarse granularity, which are then reconstructed into a graph structure. We design a two-dimensional binary search segmentation algorithm based on variance constraints for GrBFL, which effectively removes redundant information while preserving key representative features. Extensive theoretical analysis and experiments demonstrate that GrBFL not only safeguards privacy and enhances efficiency but also maintains robust utility, consistently outperforming other state-of-the-art FL methods. The code is available at https://github.com/AIGNLAI/GrBFL.


Key Contributions

  • GrBFL framework that transforms images into coarse-grained graph structures via granular-ball computing before FL training, reducing information available for gradient reconstruction attacks
  • Two-dimensional binary search segmentation algorithm based on variance constraints that removes redundant information while preserving classification-relevant features
  • Theoretical analysis showing that reducing input information content bounds the amount of data an attacker can reconstruct from shared gradients, without sacrificing model utility
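The variance-constrained granulation idea can be illustrated with a minimal sketch: recursively split an image region until each region's pixel variance falls below a threshold, then keep only each region's coarse summary (its mean). This is an assumption-laden simplification for intuition only; the paper's actual two-dimensional binary search algorithm and graph construction are not reproduced here, and the function name `segment` and its parameters are hypothetical.

```python
import numpy as np

def segment(img, var_thresh=25.0, min_size=2):
    """Recursively split `img` into regions whose pixel variance is at most
    `var_thresh` (a simplified sketch of variance-constrained granulation,
    not the paper's exact 2-D binary search).  Each leaf region is returned
    as (row_start, row_end, col_start, col_end, mean_value)."""
    regions = []

    def recurse(r0, r1, c0, c1):
        patch = img[r0:r1, c0:c1]
        h, w = r1 - r0, c1 - c0
        if patch.var() <= var_thresh or (h <= min_size and w <= min_size):
            # Region is homogeneous enough: keep only its coarse summary.
            regions.append((r0, r1, c0, c1, float(patch.mean())))
            return
        if h >= w:                       # split along the longer axis
            mid = r0 + h // 2
            recurse(r0, mid, c0, c1)
            recurse(mid, r1, c0, c1)
        else:
            mid = c0 + w // 2
            recurse(r0, r1, c0, mid)
            recurse(r0, r1, mid, c1)

    recurse(0, img.shape[0], 0, img.shape[1])
    return regions

# A flat image stays one region; a structured image is split further.
flat = np.full((8, 8), 5.0)
print(segment(flat, var_thresh=1.0))     # [(0, 8, 0, 8, 5.0)]
```

Because only region-level summaries (rather than raw pixels) feed the downstream model, the input carries less information for a gradient-based attacker to reconstruct.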

🛡️ Threat Analysis

Model Inversion Attack

The paper's primary security contribution is a defense against gradient leakage/reconstruction attacks in federated learning, in which an adversary reconstructs participants' training data from shared gradients (citing Zhu et al. 2019, 'Deep Leakage from Gradients'). GrBFL reduces the reconstructable information at the input stage by converting images into coarse-grained graph structures, and the defense is validated both theoretically and experimentally against reconstruction attacks.
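The leakage risk being defended against is visible even in a toy setting (this is an illustrative sketch, not the paper's experiment): for a single-example squared loss on a linear model, the shared gradient is a scaled copy of the input, so an honest-but-curious server can recover the raw features exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)           # client's private input features
y = 3.0                          # client's private label
w = rng.normal(size=5)           # current global model weights

# Client computes the gradient of (w.x - y)^2 and shares it with the server.
residual = w @ x - y
grad = 2.0 * residual * x        # grad_w = 2 * (w.x - y) * x

# Server-side "attack": the gradient is x scaled by 2*(w.x - y),
# so dividing by that scalar recovers the private input.
x_hat = grad / (2.0 * residual)
print(np.allclose(x_hat, x))     # True
```

Deep networks require iterative gradient matching rather than this closed form, but the principle is the same; coarsening the input, as GrBFL does, limits what such reconstruction can recover.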


Details

Domains
vision, federated-learning, graph
Model Types
gnn, federated, cnn
Threat Tags
training_time, white_box
Datasets
CIFAR-10
Applications
image classification, federated learning