Deciphering the Interplay between Attack and Protection Complexity in Privacy-Preserving Federated Learning
Xiaojin Zhang¹, Mingcong Xu¹, Yiming Li¹, Wei Chen¹, Qiang Yang²
Published on arXiv (arXiv:2508.11907)
Model Inversion Attack
OWASP ML Top 10 — ML03
Key Finding
Derives quantitative bounds showing that attack complexity scales with privacy leakage and gradient distortion while protection complexity scales with model dimensionality and privacy budget, illuminating the fundamental attack-defense trade-off in privacy-preserving FL.
Maximum Bayesian Privacy (MBP)
Novel technique introduced
Federated learning (FL) offers a promising paradigm for collaborative model training while preserving data privacy. However, its susceptibility to gradient inversion attacks poses a significant challenge, necessitating robust privacy protection mechanisms. This paper introduces a novel theoretical framework to decipher the intricate interplay between attack and protection complexities in privacy-preserving FL. We formally define "Attack Complexity" as the minimum computational and data resources an adversary requires to reconstruct private data below a given error threshold, and "Protection Complexity" as the expected distortion introduced by privacy mechanisms. Leveraging Maximum Bayesian Privacy (MBP), we derive tight theoretical bounds for protection complexity, demonstrating its scaling with model dimensionality and privacy budget. Furthermore, we establish comprehensive bounds for attack complexity, revealing its dependence on privacy leakage, gradient distortion, model dimension, and the chosen privacy level. Our findings quantitatively illuminate the fundamental trade-offs between privacy guarantees, system utility, and the effort required for both attacking and defending. This framework provides critical insights for designing more secure and efficient federated learning systems.
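The abstract's claim that protection complexity (expected distortion) scales with model dimensionality and privacy budget can be illustrated with a small numerical sketch. The snippet below uses the standard Gaussian mechanism as an illustrative stand-in for the paper's protection mechanism (the exact mechanism, noise calibration, and the `sensitivity`/`delta` parameters here are assumptions, not taken from the paper): noise of scale σ ∝ 1/ε added to a d-dimensional gradient yields an expected ℓ₂ distortion of roughly σ·√d, so distortion grows with dimension d and shrinks as the privacy budget ε is relaxed.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_distortion(d, epsilon, sensitivity=1.0, delta=1e-5, trials=2000):
    """Monte-Carlo estimate of E||noise||_2 for Gaussian-mechanism noise
    added to a d-dimensional gradient (illustrative calibration)."""
    # Standard (epsilon, delta)-DP Gaussian noise scale: sigma ~ 1/epsilon
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(0.0, sigma, size=(trials, d))
    # E||noise||_2 concentrates around sigma * sqrt(d)
    return np.linalg.norm(noise, axis=1).mean()

base  = expected_distortion(d=100, epsilon=1.0)  # reference point
wide  = expected_distortion(d=400, epsilon=1.0)  # 4x dimension -> ~2x distortion
loose = expected_distortion(d=100, epsilon=4.0)  # 4x budget    -> ~1/4 distortion
```

This matches the qualitative scaling in the abstract: higher-dimensional models require more total distortion to reach the same privacy level, while a looser budget ε lowers the distortion the defender must pay.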
Key Contributions
- Formal definitions of "Attack Complexity" (minimum adversarial resources to reconstruct private data below an error threshold) and "Protection Complexity" (expected distortion from privacy mechanisms) in FL
- Tight theoretical bounds for protection complexity using Maximum Bayesian Privacy (MBP), showing scaling with model dimensionality and privacy budget
- Comprehensive bounds for attack complexity revealing its dependence on privacy leakage, gradient distortion, model dimension, and chosen privacy level
🛡️ Threat Analysis
The paper's central concern is gradient inversion attacks in FL, in which an adversary reconstructs participants' private training data from the gradients they share during training. "Attack Complexity" is formally defined as the minimum resources an adversary needs to reconstruct private data below an error threshold — this is precisely the model-inversion/gradient-leakage threat (OWASP ML03). The protection analysis is framed as a defense against this reconstruction threat.
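To make the gradient-leakage threat concrete, here is a minimal closed-form example (a textbook illustration, not the paper's attack model): for a single sample passed through a linear model with a bias term under squared loss, the shared gradient alone reveals the private input exactly, since the weight gradient is the residual times the input and the bias gradient is the residual itself. All shapes and values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- "Client" side: one private sample through a linear model + bias ---
d = 8
x = rng.normal(size=d)   # private input the attacker wants to recover
t = 0.7                  # private target
w = rng.normal(size=d)   # current model weights (known to the server)
b = 0.1

err = w @ x + b - t      # residual on the private sample
grad_w = 2 * err * x     # weight gradient, shared in FL
grad_b = 2 * err         # bias gradient, also shared

# --- "Attacker" side: grad_w / grad_b cancels the residual, exposing x ---
x_hat = grad_w / grad_b
```

Defenses raise the attack complexity by distorting exactly these shared gradients (noise, clipping, compression); the paper's bounds quantify how much distortion an adversary's reconstruction effort must overcome.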