defense 2025

When Secure Aggregation Falls Short: Achieving Long-Term Privacy in Asynchronous Federated Learning for LEO Satellite Networks

Mohamed Elmahallawy 1, Tie Luo 2


Published on arXiv: 2508.13425

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

LTP-FLEO prevents cross-round model inversion attacks by ensuring individual client model updates cannot be isolated from aggregates across multiple FL rounds, while maintaining competitive model accuracy and convergence.

LTP-FLEO

Novel technique introduced


Secure aggregation is a common technique in federated learning (FL) for protecting data privacy from both curious internal entities (clients or server) and external adversaries (eavesdroppers). However, in dynamic and resource-constrained environments such as low Earth orbit (LEO) satellite networks, traditional secure aggregation methods fall short in two aspects: (1) they assume continuous client availability while LEO satellite visibility is intermittent and irregular; (2) they consider privacy in each communication round but have overlooked the possible privacy leakage through multiple rounds. To address these limitations, we propose LTP-FLEO, an asynchronous FL framework that preserves long-term privacy (LTP) for LEO satellite networks. LTP-FLEO introduces (i) privacy-aware satellite partitioning, which groups satellites based on their predictable visibility to the server and enforces joint participation; (ii) model age balancing, which mitigates the adverse impact of stale model updates; and (iii) fair global aggregation, which treats satellites of different visibility durations in an equitable manner. Theoretical analysis and empirical validation demonstrate that LTP-FLEO effectively safeguards both model and data privacy across multi-round training, promotes fairness in line with satellite contributions, accelerates global convergence, and achieves competitive model accuracy.
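The abstract's "model age balancing" discounts stale local updates so intermittently visible satellites do not drag the global model backward. The paper does not give its exact rule here, so the sketch below uses a generic exponential staleness discount as a stand-in; the `staleness_weight` function, its `alpha` parameter, and all values are illustrative assumptions, not the paper's formula.

```python
import math

def staleness_weight(age, alpha=0.5):
    # Generic staleness discount: the older an update (in rounds), the less
    # influence it gets. Exponential form and alpha=0.5 are assumptions;
    # LTP-FLEO's actual age-balancing rule may differ.
    return math.exp(-alpha * age)

def aggregate(updates):
    # updates: list of (flattened_update_vector, age_in_rounds).
    # Normalize the staleness weights so they form a convex combination.
    weights = [staleness_weight(age) for _, age in updates]
    total = sum(weights)
    dim = len(updates[0][0])
    out = [0.0] * dim
    for (vec, _), w in zip(updates, weights):
        for i in range(dim):
            out[i] += (w / total) * vec[i]
    return out

# A fresh update (age 0) outweighs a two-round-stale one of equal size.
print(aggregate([([1.0, 1.0], 0), ([3.0, 3.0], 2)]))
```

With the assumed discount, the aggregate lands between the two updates but closer to the fresh one, which is the qualitative behavior the abstract describes.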


Key Contributions

  • Privacy-aware satellite partitioning that enforces joint participation of satellite groups to prevent cross-round model isolation by a curious server
  • Model age balancing mechanism that mitigates the influence of stale local models from intermittently available LEO satellites on the global model
  • Theoretical analysis showing LTP-FLEO preserves long-term privacy (across multiple rounds) while maintaining convergence guarantees and model accuracy

🛡️ Threat Analysis

Model Inversion Attack

The adversary (curious aggregation server or eavesdropper) isolates individual client model updates by subtracting consecutive round aggregates, then applies model inversion to reconstruct private training data. LTP-FLEO defends against this by enforcing privacy-aware satellite grouping that prevents any single client model from being isolated across rounds, directly countering the gradient/model-update leakage threat in federated learning.
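The cross-round subtraction this paragraph describes can be shown in a toy sketch. Satellite names and update values are made up, and I assume the returning satellites resubmit unchanged (stale) updates so the subtraction cancels exactly; in practice the cancellation is approximate, but the isolation hazard is the same.

```python
# Toy flattened model updates from three satellites (illustrative values).
u_A = [0.5, -1.2, 0.3]
u_B = [1.1, 0.4, -0.7]
u_C = [-0.2, 0.9, 0.6]

def vadd(x, y):
    return [a + b for a, b in zip(x, y)]

def vsub(x, y):
    return [a - b for a, b in zip(x, y)]

# Round t: only A and B are visible; secure aggregation reveals their sum,
# but neither individual update.
agg_t = vadd(u_A, u_B)

# Round t+1: C comes into view while A and B resubmit stale, unchanged
# updates (an assumption here, and a real hazard in asynchronous FL).
agg_t1 = vadd(agg_t, u_C)

# A curious server subtracts consecutive aggregates and isolates C's
# update, the precondition for model inversion on C's private data.
isolated = vsub(agg_t1, agg_t)
print(isolated)  # numerically equal to u_C
```

LTP-FLEO's privacy-aware partitioning blocks exactly this step: satellites with the same predicted visibility participate as a group, so any two consecutive aggregates differ by at least a whole group's worth of updates, never a single client's.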


Details

Domains
federated-learning
Model Types
federated
Threat Tags
grey_box, training_time
Applications
federated learning, leo satellite networks, internet of remote things