Spike-PTSD: A Bio-Plausible Adversarial Example Attack on Spiking Neural Networks via PTSD-Inspired Spike Scaling
Lingxin Jin 1,2, Wei Jiang 1, Maregu Assefa Habtie 2, Letian Chen 1, Jinyu Zhan 1, Xingzhi Zhou 1, Lin Zuo 1, Naoufel Werghi 2
Published on arXiv
2604.01750
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Achieves over 99% attack success rate across six datasets and four SNN models by exploiting bio-plausible spike dynamics
Spike-PTSD
Novel technique introduced
Spiking Neural Networks (SNNs) are energy-efficient and biologically plausible, making them attractive for embedded and security-critical systems, yet their adversarial robustness remains an open problem. Existing adversarial attacks largely overlook SNNs' bio-plausible spike dynamics. We propose Spike-PTSD, a biologically inspired adversarial attack framework modeled on the abnormal neural firing observed in Post-Traumatic Stress Disorder (PTSD). It localizes decision-critical layers, selects target neurons via hyperactivation/hypoactivation signatures, and optimizes adversarial examples with a dual objective. Across six datasets, three encoding schemes, and four models, Spike-PTSD achieves success rates above 99%, systematically compromising SNN robustness. Code: https://github.com/bluefier/Spike-PTSD.
Key Contributions
- Bio-plausible adversarial attack framework inspired by PTSD neural firing abnormalities
- Layer localization and neuron selection via hyper/hypoactivation signatures
- Dual-objective optimization achieving >99% attack success across six datasets, three encoding types, and four SNN architectures
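The neuron-selection step above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the selection criterion (deviation of per-neuron firing rate from the layer median) and all names here are assumptions standing in for the hyper/hypoactivation signatures the authors describe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spike trains for one layer: (timesteps, neurons) binary matrix.
T, N = 50, 16
spikes = (rng.random((T, N)) < rng.uniform(0.05, 0.6, N)).astype(float)

def activation_signature(spike_train):
    """Per-neuron firing rate over the simulation window."""
    return spike_train.mean(axis=0)

def select_neurons(spike_train, k=4):
    """Pick the k neurons whose firing rates deviate most from the layer
    median -- a stand-in for hyper/hypoactivation signature selection
    (the paper's exact criterion is not reproduced here)."""
    rates = activation_signature(spike_train)
    deviation = np.abs(rates - np.median(rates))
    return np.argsort(deviation)[-k:]

targets = select_neurons(spikes)  # indices of decision-critical neurons
```

A dual-objective loss would then combine a misclassification term on the logits with a term driving the selected neurons toward abnormal (hyper- or hypo-) firing rates, weighted by a balance coefficient.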
🛡️ Threat Analysis
Spike-PTSD crafts adversarial perturbations that cause SNN misclassification at inference time, manipulating spike patterns to exploit decision-critical neurons. This makes it an input manipulation attack targeting the spike-based inference process itself.
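To see why small input perturbations reach the spike domain, consider Poisson-style rate encoding, one of the common SNN input encodings: pixel intensity sets the per-timestep spike probability, so a bounded pixel change shifts firing rates directly. The sketch below is illustrative only; the encoding and the perturbation are generic assumptions, not the attack's actual optimization.

```python
import numpy as np

rng = np.random.default_rng(1)

def rate_encode(x, T=100):
    """Poisson-style rate encoding: intensity in [0, 1] becomes the
    per-timestep spike probability for each input neuron."""
    return (rng.random((T,) + x.shape) < x).astype(int)

x = np.full(4, 0.5)                        # toy 4-pixel "image"
delta = np.array([0.1, -0.1, 0.1, -0.1])   # bounded perturbation, eps = 0.1
x_adv = np.clip(x + delta, 0.0, 1.0)

clean_rates = rate_encode(x).mean(axis=0)  # observed firing rates, clean
adv_rates = rate_encode(x_adv).mean(axis=0)  # shifted firing rates, adversarial
```

An attack like Spike-PTSD would choose such a perturbation by gradient-based optimization so that the resulting rate shifts push the selected decision-critical neurons into abnormal firing regimes.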