
Accuracy-Robustness Trade Off via Spiking Neural Network Gradient Sparsity Trail

Luu Trong Nhan, Luu Trung Duong, Pham Ngoc Nam, Truong Cong Thang

0 citations · 68 references · arXiv


Published on arXiv: 2509.23762

Input Manipulation Attack (OWASP ML Top 10: ML01)

Key Finding

SNNs with natural gradient sparsity achieve state-of-the-art adversarial defense without explicit regularization, but this same sparsity suppresses model expressivity, revealing a gradient-sparsity-driven accuracy-robustness trade-off analogous to the one observed in ANNs.


Spiking Neural Networks (SNNs) have attracted growing interest in both computational neuroscience and artificial intelligence, primarily due to their inherent energy efficiency and compact memory footprint. However, achieving adversarial robustness in SNNs, particularly for vision-related tasks, remains a nascent and underexplored challenge. Recent studies have proposed leveraging sparse gradients as a form of regularization to enhance robustness against adversarial perturbations. In this work, we present a surprising finding: under specific architectural configurations, SNNs exhibit natural gradient sparsity and can achieve state-of-the-art adversarial defense performance without any explicit regularization. Further analysis reveals a trade-off between robustness and generalization: while sparse gradients contribute to improved adversarial resilience, they can impair the model's ability to generalize; conversely, denser gradients support better generalization but increase vulnerability to attacks. Our findings offer new insights into the dual role of gradient sparsity in SNN training.
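The abstract's central quantity, input-gradient sparsity, can be illustrated with a small numpy sketch. This is not the paper's implementation: the `sparsity` helper, the tolerance, and the toy ReLU surrogate are assumptions chosen to show how a thresholded nonlinearity (like a spiking neuron's firing condition) zeroes out most gradient entries.

```python
import numpy as np

def sparsity(grad, tol=1e-8):
    """Fraction of gradient entries that are (numerically) zero."""
    grad = np.asarray(grad, dtype=float)
    return float(np.mean(np.abs(grad) <= tol))

# Toy thresholded layer: a spiking-style nonlinearity passes gradient
# only where the pre-activation exceeds threshold, zeroing the rest.
rng = np.random.default_rng(0)
x = rng.normal(size=64)
w = rng.normal(size=64)
pre = w * x

# Gradient of sum(relu(w * x)) w.r.t. x: w where pre > 0, else 0.
grad = np.where(pre > 0, w, 0.0)

print(f"input-gradient sparsity: {sparsity(grad):.2f}")
```

Roughly half the entries vanish here; in a deep temporally coded SNN the effect compounds across layers and time steps, which is the mechanism the paper credits for natural robustness.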


Key Contributions

  • Demonstrates that temporally coded SNNs exhibit natural adversarial robustness (SOTA without explicit regularization) under specific architectural configurations due to inherent gradient sparsity
  • Theoretically and empirically characterizes gradient sparsity as the shared computational mechanism linking adversarial robustness and generalization in SNNs
  • Establishes a robustness-accuracy trade-off in SNNs: sparser gradients weaken adversarial attack efficacy but limit expressivity and clean accuracy, and vice versa

🛡️ Threat Analysis

Input Manipulation Attack

Paper analyzes SNN resilience against adversarial input perturbations (evasion attacks), identifies gradient sparsity as the natural defense mechanism, and characterizes the robustness-generalization trade-off under adversarial attack settings.


Details

Domains: vision
Model Types: cnn
Threat Tags: white_box, inference_time
Applications: image classification, event-based vision