
Published on arXiv

2601.16589

Input Manipulation Attack

OWASP ML Top 10 — ML01

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Identifies hardware trojans, adversarial manipulations, and side-channel attacks as the primary security threats to neuromorphic systems, with current defenses remaining immature and largely unstandardized.


Neuromorphic computing mimics brain-inspired mechanisms through spiking neurons and energy-efficient processing, offering a pathway to efficient in-memory computing (IMC). However, these advances raise critical security and privacy concerns. As the adoption of bio-inspired architectures and memristive devices grows, so does the urgency of assessing how vulnerable these emerging technologies are to hardware and software attacks. Emerging architectures introduce new attack surfaces, particularly through asynchronous, event-driven processing and stochastic device behavior. At the same time, integrating memristors into neuromorphic hardware and implementing spiking neural networks in software open up diverse possibilities for advanced computing architectures, including security-aware applications. This survey systematically analyzes the security landscape of neuromorphic systems, covering attack methodologies, side-channel vulnerabilities, and countermeasures. We focus on both hardware and software concerns relevant to spiking neural networks (SNNs), and on hardware primitives, such as Physical Unclonable Functions (PUFs) and True Random Number Generators (TRNGs), for cryptographic and secure-computation applications. We approach this analysis from diverse perspectives, from attack methodologies to countermeasure strategies that integrate efficiency and protection in brain-inspired hardware. This review not only maps the current landscape of security threats but also provides a foundation for developing secure and trustworthy neuromorphic architectures.


Key Contributions

  • Systematic analysis of the security landscape of neuromorphic systems covering adversarial attacks, side-channel vulnerabilities, and hardware trojans in spiking neural networks
  • Review of hardware primitives (PUFs, TRNGs) and their role in security-aware neuromorphic architectures
  • Mapping of countermeasure strategies including data obfuscation, adversarial training, and cryptographic defenses for SNNs and memristive hardware
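The hardware primitives reviewed above (PUFs, TRNGs) derive security from uncontrollable physical device variation. A minimal, hypothetical sketch of the challenge–response idea behind a memristive PUF (the threshold model, function names, and seeds are illustrative assumptions, not taken from the survey):

```python
import hashlib
import random

def make_puf(seed: int, n_cells: int = 64):
    """Model per-device memristor variation as a fixed random
    switching threshold per cell; the seed stands in for
    uncontrollable fabrication variation unique to one chip."""
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n_cells)]

    def respond(challenge: bytes) -> str:
        # Hash the challenge to derive a pseudo-random read level
        # per cell; each response bit records whether that level
        # exceeds the cell's device-unique threshold.
        digest = hashlib.sha256(challenge).digest()
        bits = []
        for i, th in enumerate(thresholds):
            level = digest[i % len(digest)] / 255.0
            bits.append('1' if level > th else '0')
        return ''.join(bits)

    return respond

# The same "device" answers a challenge reproducibly, while a
# differently fabricated device (different seed) diverges.
dev_a = make_puf(seed=1)
dev_b = make_puf(seed=2)
assert dev_a(b"challenge-42") == dev_a(b"challenge-42")
assert dev_a(b"challenge-42") != dev_b(b"challenge-42")
```

The sketch captures only the intended behavior (stable per device, unique across devices); real memristive PUFs must additionally handle noise, aging, and modeling attacks, which the survey's countermeasure discussion addresses.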

🛡️ Threat Analysis

Input Manipulation Attack

The survey explicitly covers adversarial manipulation attacks targeting spiking neural networks: input-level evasion attacks on SNN-based ML models at inference time.
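As a toy illustration of this threat model (the rate-coded classifier and the perturbation routine below are hypothetical sketches, not attacks described in the survey), an input-level evasion on a binary spike train can be as simple as flipping the fewest spike bits needed to push the firing rate across a decision threshold:

```python
def rate_classify(spike_train, threshold=0.5):
    """Toy rate-coded readout: label 1 if the firing rate
    of the input spike train exceeds a threshold, else 0."""
    rate = sum(spike_train) / len(spike_train)
    return 1 if rate > threshold else 0

def evasion_attack(spike_train, threshold=0.5):
    """Minimal input-manipulation attack: flip spike bits one at
    a time until the predicted label changes."""
    perturbed = list(spike_train)
    original = rate_classify(perturbed, threshold)
    target_bit = 1 if original == 0 else 0
    for i, s in enumerate(perturbed):
        if s != target_bit:
            perturbed[i] = target_bit
            if rate_classify(perturbed, threshold) != original:
                break
    return perturbed

clean = [1, 0, 1, 0, 0, 0, 0, 0]   # firing rate 0.25 -> class 0
adv = evasion_attack(clean)        # a few flipped spikes
assert rate_classify(adv) != rate_classify(clean)
```

Real evasion attacks on SNNs operate on the same principle, but perturb spike timing and counts under a stealthiness budget rather than greedily flipping bits.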

Model Poisoning

The survey covers hardware Trojans embedded in neuromorphic hardware and SNN models, along with countermeasures including Neural Cleanse and adversarial training, matching the backdoor/Trojan threat model of ML10.


Details

Model Types
traditional_ml
Threat Tags
training_time, inference_time, white_box, physical
Applications
neuromorphic computing, spiking neural networks, in-memory computing, edge ai