defense 2026

FedFG: Privacy-Preserving and Robust Federated Learning via Flow-Matching Generation

Ruiyang Wang, Rong Pan, Zhengan Yao



Published on arXiv (2603.27986)

Data Poisoning Attack (OWASP ML Top 10 — ML02)

Model Inversion Attack (OWASP ML Top 10 — ML03)

Key Finding

Achieves higher accuracy than prior federated-learning defenses under multiple poisoning attack strategies while maintaining strong privacy protection against gradient inversion.

FedFG

Novel technique introduced


Federated learning (FL) enables distributed clients to collaboratively train a global model using local private data. Nevertheless, recent studies show that conventional FL algorithms still exhibit deficiencies in privacy protection, and the server lacks a reliable and stable aggregation rule for updating the global model. This situation creates opportunities for adversaries: on the one hand, they may eavesdrop on uploaded gradients or model parameters, potentially leaking benign clients' private data; on the other hand, they may compromise clients to launch poisoning attacks that corrupt the global model. To balance accuracy and security, we propose FedFG, a robust FL framework based on flow-matching generation that simultaneously preserves client privacy and resists sophisticated poisoning attacks. On the client side, each local network is decoupled into a private feature extractor and a public classifier. Each client is further equipped with a flow-matching generator that replaces the extractor when interacting with the server, thereby protecting private features while learning an approximation of the underlying data distribution. Complementing the client-side design, the server employs a client-update verification scheme and a novel robust aggregation mechanism driven by synthetic samples produced by the flow-matching generator. Experiments on MNIST, FMNIST, and CIFAR-10 demonstrate that, compared with prior work, our approach adapts to multiple attack strategies and achieves higher accuracy while maintaining strong privacy protection.
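The client-side generator is trained with flow matching, i.e. by regressing a velocity field along an interpolation path between noise and data. The paper's exact generator and conditioning are not given in this summary; the sketch below only illustrates the standard conditional flow-matching training target (linear path `x_t = (1-t)·x0 + t·x1`, target velocity `x1 - x0`) with a toy numpy batch and a placeholder velocity prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_targets(x0, x1, t):
    """Conditional flow-matching targets on the linear (rectified-flow) path.

    x0: noise samples, x1: data samples, t: per-sample times in [0, 1].
    On the path x_t = (1 - t) * x0 + t * x1 the regression target for the
    velocity network is the constant velocity x1 - x0.
    """
    t = t.reshape(-1, 1)
    x_t = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0
    return x_t, v_target

# Toy batch: 4 samples of a 2-D "feature" distribution (stand-in for
# the private features the generator learns to approximate).
x1 = rng.normal(loc=3.0, size=(4, 2))   # data samples
x0 = rng.normal(size=(4, 2))            # noise samples
t = rng.uniform(size=4)

x_t, v_target = cfm_targets(x0, x1, t)

# A hypothetical velocity-network prediction; zeros here, so the loss is
# just the mean squared norm of the target velocities.
v_pred = np.zeros_like(v_target)
loss = np.mean((v_pred - v_target) ** 2)
print(x_t.shape, v_target.shape, loss)
```

In a real setup `v_pred` would come from a trained network `v_theta(x_t, t)`, and sampling would integrate that field from noise to synthetic samples.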


Key Contributions

  • Flow-matching generator architecture that decouples private feature extractors from public classifiers, protecting client privacy during parameter sharing
  • Server-side client-update verification scheme using synthetic samples to detect poisoning attacks without accessing private features
  • Unified framework that simultaneously defends against gradient reconstruction attacks (privacy) and Byzantine poisoning attacks (robustness)

🛡️ Threat Analysis

Data Poisoning Attack

Defends against poisoning attacks in federated learning where malicious clients corrupt the global model through manipulated updates. The paper proposes robust aggregation mechanisms and client-update verification to detect and mitigate Byzantine/poisoning behavior.
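The summary does not spell out FedFG's verification rule, so the sketch below shows only the generic pattern it describes: the server scores each client update against synthetic samples, keeps updates that pass a threshold, and averages the survivors. The `score_fn`, threshold, and 1-D "updates" are all illustrative assumptions, not the paper's mechanism.

```python
import numpy as np

def verify_and_aggregate(updates, score_fn, threshold=0.5):
    """Generic server-side sketch (not the paper's exact rule):
    score each client update on synthetic samples, then average
    only the updates whose score clears the threshold."""
    scores = np.array([score_fn(u) for u in updates])
    keep = scores >= threshold
    if not keep.any():
        raise ValueError("no update passed verification")
    kept = np.stack([u for u, k in zip(updates, keep) if k])
    return kept.mean(axis=0), keep

# Toy round: two honest clients send updates near 1.0; a poisoned
# client sends -10.0 to drag the global model away.
updates = [np.array([1.0]), np.array([1.1]), np.array([-10.0])]

# Stand-in for "accuracy on synthetic samples": 1.0 if the update is
# close to the honest consensus, else 0.0.
score = lambda u: 1.0 if abs(u[0] - 1.0) < 1.0 else 0.0

agg, keep = verify_and_aggregate(updates, score)
print(agg, keep)  # the poisoned update is filtered before averaging
```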

Model Inversion Attack

Protects against gradient reconstruction attacks (privacy attacks that recover training data from gradients/parameters). The flow-matching generator replaces private feature extractors during server interaction, preventing adversaries from reconstructing clients' private data from uploaded gradients.


Details

Domains: federated-learning, vision
Model Types: federated, cnn
Threat Tags: training_time, grey_box
Datasets: MNIST, Fashion-MNIST, CIFAR-10
Applications: federated learning, image classification