
Formalisation of Security for Federated Learning with DP and Attacker Advantage in IIIf for Satellite Swarms -- Extended Version

Florian Kammüller 1,2

0 citations · 36 references · arXiv


Published on arXiv · 2512.06467

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

Formally proves within Isabelle that differential privacy defenses against DLG (gradient leakage) attacks satisfy a quantified attacker advantage bound, providing machine-checkable security guarantees for FL systems


In distributed applications such as swarms of satellites, machine learning can be applied efficiently even on small devices by using Federated Learning (FL). FL reduces learning complexity by transmitting only updates to the global model on the server, in the form of stochastic gradient descent differences. FL naturally supports differential privacy, but new attacks, so-called Data Leakage from Gradient (DLG) attacks, have recently been discovered. There has been work on defenses against DLG, but their security lacks a rigorous foundation and evaluation. In the current work, we extend existing work on a formal notion of Differential Privacy for Federated Learning in distributed dynamic systems and relate it to the notion of attacker advantage. This formalisation is carried out within the Isabelle Insider and Infrastructure framework (IIIf), allowing machine-supported verification of both the theory and its applications within the proof assistant Isabelle. Satellite swarm systems serve both as a motivating use case and as a validation case study.
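The DP defense the paper formalizes operates at the client side: each client bounds the sensitivity of its gradient update and perturbs it before transmission. A minimal sketch of this pattern (DP-SGD-style clipping plus Gaussian noise; the function name and parameters are illustrative placeholders, not from the paper):

```python
import numpy as np

def dp_client_update(grad, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip a client's gradient to bound sensitivity, then add
    Gaussian noise before sending it to the FL server.

    Hypothetical illustration of the Gaussian mechanism applied to a
    federated client's stochastic-gradient update.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    # Scale down any gradient whose L2 norm exceeds clip_norm.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise scale is proportional to the sensitivity bound clip_norm.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)
    return clipped + noise

grad = np.array([3.0, 4.0])          # raw client gradient, L2 norm 5
private_update = dp_client_update(grad)
```

With `noise_mult = 0` the function reduces to plain clipping, which is what bounds the per-client sensitivity that the privacy analysis relies on.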


Key Contributions

  • Formal extension of FL-DP security notions within the Isabelle IIIf framework to include a rigorous notion of Attacker Advantage (Adv(Att)) and its relationship to differential privacy
  • Machine-supported verification (via Isabelle theorem prover) of security properties for FL-DP protocols, including a composition theorem for decomposing DP proofs to client gradients
  • Satellite swarm case study demonstrating how the formal meta-theory applies to a concrete distributed ML deployment scenario
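The connection between differential privacy and attacker advantage can be sketched with the standard distinguishing bound; the paper's exact IIIf definition of Adv(Att) may differ, so this is only the textbook form. For an $\varepsilon$-DP mechanism $M$ and any attacker $Att$ deciding between neighbouring datasets $D, D'$:

```latex
\mathrm{Adv}(Att)
  = \bigl|\Pr[Att(M(D)) = 1] - \Pr[Att(M(D')) = 1]\bigr|
  \le e^{\varepsilon} - 1
```

since $\varepsilon$-DP gives $\Pr[Att(M(D)) = 1] \le e^{\varepsilon}\,\Pr[Att(M(D')) = 1]$, so the gap between the two acceptance probabilities is at most $(e^{\varepsilon} - 1)\Pr[Att(M(D')) = 1] \le e^{\varepsilon} - 1$.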

🛡️ Threat Analysis

Model Inversion Attack

The paper's central threat is DLG (Data Leakage from Gradient) — an adversary reconstructing private training data from gradients shared during FL training. The paper formalizes and proves the security of DP as a defense against this gradient inversion/reconstruction attack, including explicit adversary modeling via Attacker Advantage (Adv(Att)).
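Why gradients leak training data is easiest to see for a single linear layer with squared loss, where reconstruction is exact in closed form (full DLG uses iterative gradient matching against deeper networks; this toy case is an assumption-laden simplification):

```python
import numpy as np

# Single client step: linear layer y_hat = W @ x + b, loss 0.5*||y_hat - y||^2.
rng = np.random.default_rng(1)
x_true = rng.normal(size=4)               # private training input
W = rng.normal(size=(2, 4))
b = rng.normal(size=2)
y = rng.normal(size=2)

err = (W @ x_true + b) - y                # residual vector
grad_W = np.outer(err, x_true)            # dL/dW = err * x^T
grad_b = err                              # dL/db = err

# The FL server (or an eavesdropper) sees only (grad_W, grad_b),
# yet the private input x_true is recoverable by elementwise division:
x_rec = grad_W[0] / grad_b[0]
```

Because each row of `grad_W` is the residual times the input, dividing by the matching bias gradient cancels the residual and returns the raw training example, which is exactly the leakage that the DP noise in the defense is meant to mask.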


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, white_box
Applications
federated learning, satellite swarm, distributed ml