
Hi-SAFE: Hierarchical Secure Aggregation for Lightweight Federated Learning

Hyeong-Gun Joo , Songnam Hong , Seunghwan Lee , Dong-Joon Shin

0 citations · 45 references · arXiv


Published on arXiv · 2511.18887

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

Hi-SAFE reduces per-user communication by over 94% at n≥24 and total communication cost by up to 52% at n=24 while cryptographically preventing gradient inference attacks under the semi-honest model.

Hi-SAFE

Novel technique introduced


Federated learning (FL) faces challenges in ensuring both privacy and communication efficiency, particularly in resource-constrained environments such as Internet of Things (IoT) and edge networks. While sign-based methods, such as sign stochastic gradient descent with majority voting (SIGNSGD-MV), offer substantial bandwidth savings, they remain vulnerable to inference attacks due to exposure of gradient signs. Existing secure aggregation techniques are either incompatible with sign-based methods or incur prohibitive overhead. To address these limitations, we propose Hi-SAFE, a lightweight and cryptographically secure aggregation framework for sign-based FL. Our core contribution is the construction of efficient majority vote polynomials for SIGNSGD-MV, derived from Fermat's Little Theorem. This formulation represents the majority vote as a low-degree polynomial over a finite field, enabling secure evaluation that hides intermediate values and reveals only the final result. We further introduce a hierarchical subgrouping strategy that ensures constant multiplicative depth and bounded per-user complexity, independent of the number of users n.
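The Fermat's Little Theorem construction can be illustrated with a standard identity: since a^(p-1) ≡ 1 (mod p) for any nonzero a, the indicator 1[t = a] equals 1 − (t − a)^(p−1) mod p, and summing indicators over all counts above n/2 expresses the majority vote as a polynomial in the vote count. The sketch below shows this identity in Python; the exact polynomial and field parameters used by Hi-SAFE may differ.

```python
# Hedged sketch: expressing a majority vote of gradient signs as a
# polynomial over a finite field F_p, using Fermat's Little Theorem
# (a^(p-1) = 1 mod p for a != 0). Illustrative only; not Hi-SAFE's
# exact construction.

def majority_vote_poly(signs, p):
    """Majority vote of +/-1 signs, evaluated as a polynomial mod p.

    Each sign s in {-1, +1} maps to a bit b = (s + 1) // 2, so the
    vote count t = sum(b_i) lies in {0, ..., n} and we require p > n.
    By Fermat's Little Theorem, 1[t == a] = 1 - (t - a)^(p-1) mod p,
    so summing these indicators over a > n/2 gives the majority bit
    as a degree-(p-1) polynomial in t.
    """
    n = len(signs)
    assert p > n  # the field must be large enough to hold the count
    t = sum((s + 1) // 2 for s in signs) % p
    maj_bit = sum(1 - pow(t - a, p - 1, p)
                  for a in range(n // 2 + 1, n + 1)) % p
    return 2 * maj_bit - 1  # map the bit back to +/-1

print(majority_vote_poly([1, 1, -1, 1, -1], 7))   # → 1
print(majority_vote_poly([-1, -1, -1, 1, 1], 7))  # → -1
```

Because the polynomial degree is tied to the field size p, keeping each vote over a small set of inputs (which the hierarchical subgrouping below enables) keeps the degree, and hence the secure-evaluation cost, small.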


Key Contributions

  • Majority vote polynomial construction via Fermat's Little Theorem enabling cryptographically secure evaluation over finite fields that reveals only the final aggregation result
  • Hierarchical subgrouping strategy ensuring constant multiplicative depth and O(1) per-user complexity independent of the number of clients
  • Over 94% reduction in per-user communication cost at n≥24 while preserving model accuracy and providing end-to-end privacy for sign-based FL
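The hierarchical subgrouping idea can be sketched as follows: partition the n users into subgroups of a fixed size g, take a majority vote inside each subgroup, then vote over the subgroup results. Each vote then spans at most g inputs, so the per-level polynomial degree and per-user work are bounded by g, independent of n. The group size and the plain sign-majority helper below are illustrative assumptions, not Hi-SAFE's exact parameters.

```python
# Hedged sketch of a two-level hierarchical majority vote. Splitting
# n users into subgroups of fixed size g bounds the fan-in of every
# vote by g, keeping the per-level cost independent of n.

def sign_majority(signs):
    """Plain majority vote over +/-1 values (ties break toward -1)."""
    return 1 if sum(signs) > 0 else -1

def hierarchical_majority(signs, g=4):
    """Two-level majority vote with subgroups of size at most g."""
    groups = [signs[i:i + g] for i in range(0, len(signs), g)]
    group_votes = [sign_majority(grp) for grp in groups]
    return sign_majority(group_votes)

votes = [1] * 7 + [-1] * 5   # 12 users, global majority is +1
print(hierarchical_majority(votes, g=4))  # → 1
```

Note that a hierarchical vote is not always identical to the flat global vote; analyzing (and bounding) that gap while preserving model accuracy is part of what a scheme like Hi-SAFE must establish.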

🛡️ Threat Analysis

Model Inversion Attack

Hi-SAFE explicitly defends against gradient leakage attacks in federated learning: the semi-honest server is the adversary who could reconstruct private training data from exposed gradient signs. The protocol hides all intermediate gradient values and reveals only the final majority vote, directly addressing the gradient reconstruction threat model.
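Why a semi-honest server learns only the final result can be illustrated with the classic pairwise-masking idea from secure aggregation: each pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel in the sum while every individual upload looks uniformly random. This is a generic illustration of the hiding property, not necessarily Hi-SAFE's concrete mechanism.

```python
import random

# Hedged sketch of pairwise-masking secure aggregation over a prime
# field: masks cancel in the aggregate, so the server recovers the
# sum but sees only randomized individual uploads. The modulus below
# is an illustrative choice.

P = 2**61 - 1  # Mersenne prime used as the field modulus (assumption)

def masked_uploads(values, rng):
    """Return per-client uploads with pairwise masks applied."""
    n = len(values)
    uploads = [v % P for v in values]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(P)            # shared mask for pair (i, j)
            uploads[i] = (uploads[i] + m) % P  # client i adds the mask
            uploads[j] = (uploads[j] - m) % P  # client j subtracts it
    return uploads

values = [3, 5, 7]
uploads = masked_uploads(values, random.Random(0))
# Individual uploads are randomized, but the sum is preserved:
assert sum(uploads) % P == sum(values) % P
```

In a sign-based scheme, the masked quantities would be the (encoded) gradient signs, and the server's view would reduce to the final majority-vote output alone.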


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, white_box
Applications
federated learning, iot, edge networks