Timestep-Compressed Attack on Spiking Neural Networks through Timestep-Level Backpropagation
Donghwa Kang 1, Doohyun Kim 1, Sang-Ki Ko 2, Jinkyu Lee 3, Hyeongboo Baek 2, Brent ByungHoon Kang 1
Published on arXiv
2508.13812
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
TCA reduces required attack latency by up to 56.6% (white-box) and 57.1% (black-box) compared to SOTA SNN adversarial attacks on VGG-11 and ResNet-17, while maintaining comparable attack success rates.
TCA (Timestep-Compressed Attack)
Novel technique introduced
State-of-the-art (SOTA) gradient-based adversarial attacks on spiking neural networks (SNNs), which largely rely on extending FGSM and PGD frameworks, face a critical limitation: substantial attack latency from multi-timestep processing, rendering them infeasible for practical real-time applications. This inefficiency stems from their design as direct extensions of ANN paradigms, which fail to exploit key SNN properties. In this paper, we propose the timestep-compressed attack (TCA), a novel framework that significantly reduces attack latency. TCA introduces two components founded on key insights into SNN behavior. First, timestep-level backpropagation (TLBP) is based on our finding that global temporal information in backpropagation is not critical to generating successful perturbations, enabling per-timestep evaluation with early stopping. Second, adversarial membrane potential reuse (A-MPR) is motivated by the observation that initial timesteps are inefficiently spent accumulating membrane potential, a warm-up phase that can be pre-calculated and reused. Our experiments on VGG-11 and ResNet-17 with the CIFAR-10/100 and CIFAR10-DVS datasets show that TCA significantly reduces the required attack latency by up to 56.6% and 57.1% compared to SOTA methods in white-box and black-box settings, respectively, while maintaining a comparable attack success rate.
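As a rough illustration of the TLBP idea, the loop below evaluates attack success after every timestep of a toy leaky integrate-and-fire (LIF) neuron and stops as soon as the prediction flips, instead of always running the full temporal window. The single-neuron LIF dynamics, the `classify` rate decoder, and all names here are illustrative assumptions, not the paper's implementation.

```python
def lif_step(v, x, threshold=1.0, decay=0.5):
    """One leaky integrate-and-fire step: leak, integrate, spike, reset."""
    v = decay * v + x
    if v >= threshold:
        return 0.0, 1  # reset potential, emit spike
    return v, 0

def attack_with_early_stop(inputs, target_label, classify, max_t=8):
    """Feed the (already perturbed) input timestep by timestep and stop
    as soon as the running prediction flips to the adversarial target."""
    v, spikes = 0.0, []
    for t, x in enumerate(inputs[:max_t]):
        v, s = lif_step(v, x)
        spikes.append(s)
        if classify(spikes) == target_label:
            return t + 1  # timesteps actually needed for the attack
    return max_t

# Toy rate decoder: at least two spikes so far -> class 1.
classify = lambda spikes: 1 if sum(spikes) >= 2 else 0
needed = attack_with_early_stop([1.2] * 8, target_label=1, classify=classify)
```

Under this toy setup a strong perturbation succeeds in two timesteps rather than the full eight, which is the latency saving TLBP's per-timestep evaluation targets.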
Key Contributions
- Timestep-Compressed Attack (TCA): a framework that reduces adversarial attack latency on SNNs by up to 56.6% (white-box) and 57.1% (black-box) over SOTA while preserving comparable attack success rates
- Timestep-Level Backpropagation (TLBP): per-timestep gradient evaluation with early stopping, exploiting the finding that full temporal backpropagation is not necessary for effective SNN attacks
- Adversarial Membrane Potential Reuse (A-MPR): pre-computes and reuses membrane potential from the warm-up phase, eliminating the inefficient initial timestep accumulation overhead
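The A-MPR observation can be sketched as follows: because early timesteps mostly charge membrane potential, that warm-up state can be computed once and each later attack iteration can resume from it rather than replaying the warm-up timesteps. The single-neuron LIF model and all names below are illustrative assumptions, not the authors' code.

```python
def run_lif(xs, v0=0.0, threshold=1.0, decay=0.5):
    """Simulate one LIF neuron over inputs xs, starting from potential v0;
    return the final membrane potential and the emitted spike train."""
    v, spikes = v0, []
    for x in xs:
        v = decay * v + x
        if v >= threshold:
            spikes.append(1)
            v = 0.0  # reset after spiking
        else:
            spikes.append(0)
    return v, spikes

# Compute the sub-threshold warm-up state once ...
warmup = [0.3, 0.3, 0.3]
v_warm, _ = run_lif(warmup)

# ... then resume from it instead of replaying the warm-up every iteration.
v_fast, spikes_fast = run_lif([0.6, 0.6], v0=v_warm)
v_full, spikes_full = run_lif(warmup + [0.6, 0.6])
```

Resuming from `v_warm` reproduces the suffix of the full simulation exactly, so the warm-up timesteps never have to be re-run inside the attack loop.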
🛡️ Threat Analysis
Proposes a novel gradient-based adversarial perturbation attack (extending FGSM/PGD) that causes misclassification at inference time on SNN image classifiers. The primary contribution is the attack method itself, which targets both white-box and black-box settings.