Defense · 2025

The Robustness of Spiking Neural Networks in Federated Learning with Compression Against Non-omniscient Byzantine Attacks

Manh V. Nguyen, Liang Zhao, Bobin Deng, Shaoen Wu



Published on arXiv (arXiv:2501.03306)

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Integrating Top-κ sparsification into FL-SNN training yields a roughly 40% accuracy improvement under the MinMax Byzantine attack compared to non-sparsified FL-SNN baselines.

Top-κ sparsification for FL-SNN

Novel technique introduced


Spiking Neural Networks (SNNs), which offer exceptional energy efficiency for inference, and Federated Learning (FL), which offers privacy-preserving distributed training, together form a rising area of interest that is highly beneficial to Internet of Things (IoT) devices. Despite this, research tackling Byzantine attacks and bandwidth limitations in FL-SNNs, both of which pose significant threats to model convergence and training time, remains largely unexplored. Beyond proposing a solution to both of these problems, this work highlights the dual benefits of FL-SNNs over FL-ANNs: robustness against non-omniscient Byzantine adversaries (those without access to local clients' datasets) and greater communication efficiency. Specifically, we discovered that a simple integration of Top-κ sparsification into the FL apparatus can leverage the advantages of SNN models, greatly reducing bandwidth usage while significantly boosting the robustness of FL training against non-omniscient Byzantine adversaries. Most notably, we saw a massive improvement of roughly 40% accuracy gain in FL-SNN training under the lethal MinMax attack.


Key Contributions

  • Empirical demonstration that FL-SNNs are inherently more robust than FL-ANNs against non-omniscient Byzantine adversaries
  • Integration of Top-κ gradient sparsification into FL-SNN training to simultaneously improve Byzantine robustness and communication efficiency
  • ~40% accuracy recovery under the MinMax attack when Top-κ sparsification is applied to FL-SNNs
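As context for the Top-κ contribution, here is a minimal stdlib-only Python sketch of Top-κ gradient sparsification. The function name and the flat-list gradient representation are illustrative assumptions, not the paper's implementation: each client keeps only its κ largest-magnitude gradient entries and zeroes the rest before uploading its update.

```python
import heapq

def top_k_sparsify(grad, k):
    """Keep the k largest-magnitude entries of a (flattened) gradient
    and zero the rest -- the classic Top-k compression step a client
    would apply before sending its update to the FL server."""
    if k >= len(grad):
        return list(grad)
    # Indices of the k entries with the largest absolute value.
    keep = set(heapq.nlargest(k, range(len(grad)), key=lambda i: abs(grad[i])))
    return [g if i in keep else 0.0 for i, g in enumerate(grad)]

print(top_k_sparsify([0.1, -2.0, 0.03, 1.5, -0.2], k=2))
# [0.0, -2.0, 0.0, 1.5, 0.0]
```

In a real deployment the indices of the surviving entries must also be transmitted; the bandwidth saving comes from sending only κ (index, value) pairs instead of the full dense update.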

🛡️ Threat Analysis

Data Poisoning Attack

Byzantine adversaries in federated learning manipulate local model updates (gradient poisoning) to degrade global model convergence — this is the canonical FL poisoning threat. The paper evaluates non-omniscient Byzantine attacks including MinMax and proposes Top-κ sparsification as a defense against these malicious client updates.
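To make the threat model concrete, the following simplified sketch shows one FL round in which a Byzantine client submits a crafted update and the server averages Top-κ-sparsified updates. All names, the toy numbers, and the "scaled inverse mean" perturbation are illustrative assumptions rather than the paper's MinMax implementation; the point is only that with per-client sparsification every participant, honest or malicious, can touch at most κ coordinates per round.

```python
import heapq

def top_k(vec, k):
    """Zero all but the k largest-magnitude coordinates."""
    keep = set(heapq.nlargest(k, range(len(vec)), key=lambda i: abs(vec[i])))
    return [v if i in keep else 0.0 for i, v in enumerate(vec)]

def mean(vectors):
    """Coordinate-wise average of a list of equal-length vectors."""
    return [sum(c) / len(c) for c in zip(*vectors)]

# Two honest clients and one Byzantine client. A MinMax-style attacker is
# non-omniscient: it sees update statistics, not raw client data. The
# scaled inverse-mean perturbation below is a toy stand-in for MinMax.
honest = [[0.9, -1.1, 0.05, 0.02], [1.1, -0.9, 0.03, 0.04]]
gamma = 3.0
malicious = [m - gamma * m for m in mean(honest)]

# Server aggregates after per-client Top-k sparsification (k = 2).
k = 2
global_update = mean([top_k(u, k) for u in honest + [malicious]])
print(global_update)
```

Because each update is masked down to κ coordinates before aggregation, the small-magnitude coordinates of every client's update (including the attacker's) are zeroed out, which is the mechanism the paper credits for the added Byzantine robustness.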


Details

Domains
federated-learning
Model Types
federated
Threat Tags
grey_box, training_time, untargeted
Applications
federated learning, iot edge devices, spiking neural network training