defense 2026

Local Layer-wise Differential Privacy in Federated Learning

Yunbo Li, Jiaping Gui, Fanchao Meng, Yue Wu

0 citations · 69 references · arXiv


Published on arXiv: 2601.01737

Model Inversion Attack

OWASP ML Top 10 — ML03

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

LaDP reduces noise injection by 46.14% and improves model accuracy by up to 102.99% over SOTA DP-FL methods while increasing the FID of adversarially reconstructed private data by >12.84% across all baselines.

LaDP

Novel technique introduced


Federated Learning (FL) enables collaborative model training without direct data sharing, yet it remains vulnerable to privacy attacks such as model inversion and membership inference. Existing differential privacy (DP) solutions for FL often inject noise uniformly across the entire model, degrading utility while providing suboptimal privacy-utility tradeoffs. To address this, we propose LaDP, a novel layer-wise adaptive noise injection mechanism for FL that optimizes privacy protection while preserving model accuracy. LaDP leverages two key insights: (1) neural network layers contribute unevenly to model utility, and (2) layer-wise privacy leakage can be quantified via KL divergence between local and global model distributions. LaDP dynamically injects noise into selected layers based on their privacy sensitivity and importance to model performance. We provide a rigorous theoretical analysis, proving that LaDP satisfies (ε,δ)-DP guarantees and converges under bounded noise. Extensive experiments on CIFAR-10/100 datasets demonstrate that LaDP reduces noise injection by 46.14% on average compared to state-of-the-art (SOTA) methods while improving accuracy by up to 102.99%. Under the same privacy budget, LaDP outperforms SOTA solutions like Dynamic Privacy Allocation LDP and AdapLDP by 25.18% and 6.1% in accuracy, respectively. Additionally, LaDP robustly defends against reconstruction attacks, increasing the FID of the reconstructed private data by >12.84% compared to all baselines. Our work advances the practical deployment of privacy-preserving FL with minimal utility loss.


Key Contributions

  • LaDP: a layer-wise adaptive noise injection mechanism that quantifies per-layer privacy leakage via KL divergence between local and global model distributions and injects noise proportionally
  • Rigorous (ε,δ)-DP guarantee and convergence proof for the layer-wise adaptive noise scheme
  • 46.14% average noise reduction and up to 102.99% accuracy improvement over SOTA uniform-DP baselines on CIFAR-10/100, with >12.84% FID increase on adversarially reconstructed data
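The core mechanism described above can be sketched in a few lines. This is an illustrative proxy, not the paper's implementation: it scores each layer's leakage with a histogram-based KL divergence between local and global parameter distributions, then scales Gaussian noise per layer by its normalized score. The function names, bin count, and `base_sigma` parameter are assumptions for illustration.

```python
import numpy as np

def layer_kl_leakage(local_layers, global_layers, n_bins=50, eps=1e-8):
    """Score each layer's privacy leakage as the KL divergence between
    histograms of its local vs. global parameter values.
    Illustrative proxy -- the paper's exact estimator may differ."""
    scores = []
    for w_loc, w_glob in zip(local_layers, global_layers):
        lo = min(w_loc.min(), w_glob.min())
        hi = max(w_loc.max(), w_glob.max())
        bins = np.linspace(lo, hi, n_bins + 1)
        p, _ = np.histogram(w_loc, bins=bins)
        q, _ = np.histogram(w_glob, bins=bins)
        # Smooth and normalize so both are valid distributions (KL >= 0).
        p = (p + eps) / (p + eps).sum()
        q = (q + eps) / (q + eps).sum()
        scores.append(float(np.sum(p * np.log(p / q))))
    return np.array(scores)

def adaptive_gaussian_noise(local_layers, scores, base_sigma=0.1, seed=0):
    """Inject Gaussian noise per layer, scaled by its normalized leakage
    score: leakier layers receive more noise, others are mostly spared."""
    weights = scores / scores.sum()
    rng = np.random.default_rng(seed)
    return [w + rng.normal(0.0, base_sigma * s * len(scores), size=w.shape)
            for w, s in zip(local_layers, weights)]
```

In a full DP pipeline, the per-layer sigma would also be calibrated to the (ε,δ) budget via a standard Gaussian-mechanism accountant; this sketch only shows the adaptive allocation idea.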

🛡️ Threat Analysis

Model Inversion Attack

Paper explicitly defends against model inversion / data reconstruction attacks in FL — evaluates defense quality via FID of adversarially reconstructed private data, achieving >12.84% FID improvement over baselines. Gradient leakage → training data reconstruction is the primary adversarial threat modeled.

Membership Inference Attack

Paper also explicitly names membership inference as a threat LaDP defends against in the FL setting, directly motivating the (ε,δ)-DP design.
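For context on what such a defense is evaluated against, a minimal loss-threshold membership inference attack (in the style of Yeom et al.) can be sketched as follows. This is a generic attack template, not the paper's evaluation code; the function name and threshold choice are assumptions.

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses):
    """Baseline membership inference: guess 'member' when a sample's loss
    falls below a threshold. Here the threshold is simply the median of
    all observed losses; returns attack accuracy on the given samples."""
    thresh = np.median(np.concatenate([member_losses, nonmember_losses]))
    correct = (np.sum(member_losses < thresh)
               + np.sum(nonmember_losses >= thresh))
    return correct / (len(member_losses) + len(nonmember_losses))
```

A DP mechanism like LaDP aims to push this attack's accuracy toward the 50% random-guess baseline by bounding how much any single training example can influence the released model.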


Details

Domains
federated-learning, vision
Model Types
federated, cnn
Threat Tags
training_time, grey_box
Datasets
CIFAR-10, CIFAR-100
Applications
federated learning, privacy-preserving machine learning