Benchmarking the Energy Cost of Assurance in Neuromorphic Edge Robotics

Sylvester Kaczmarek


Published on arXiv: 2603.13880

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Reduces gradient-based adversarial attack success from 82.1% to 18.7% and temporal jitter attack success from 75.8% to 25.1% while maintaining roughly 45 microjoules per inference; counter-intuitively, the fully defended configuration draws less dynamic power

Hierarchical Temporal Defense (HTD)

Novel technique introduced


Deploying trustworthy artificial intelligence on edge robotics imposes a difficult trade-off between high-assurance robustness and energy sustainability. Traditional defense mechanisms against adversarial attacks typically incur significant computational overhead, threatening the viability of power-constrained platforms in environments such as cislunar space. This paper quantifies the energy cost of assurance in event-driven neuromorphic systems. We benchmark the Hierarchical Temporal Defense (HTD) framework on the BrainChip Akida AKD1000 processor against a suite of adversarial temporal attacks. We demonstrate that, unlike traditional deep learning defenses, which often degrade efficiency significantly as robustness increases, the event-driven nature of the proposed architecture achieves a superior trade-off. The system reduces gradient-based adversarial success rates from 82.1% to 18.7% and temporal jitter success rates from 75.8% to 25.1%, while maintaining an energy consumption of approximately 45 microjoules per inference. We report a counter-intuitive reduction in dynamic power consumption in the fully defended configuration, attributed to volatility-gated plasticity mechanisms that induce higher network sparsity. These results provide empirical evidence that neuromorphic sparsity enables sustainable, high-assurance edge autonomy.


Key Contributions

  • First energy-efficiency benchmark of adversarial defenses on commercial neuromorphic hardware (BrainChip Akida AKD1000)
  • Demonstrates that neuromorphic defenses can improve robustness without energy penalty — defended configuration actually reduces power consumption due to increased sparsity
  • Hierarchical Temporal Defense (HTD) framework achieving 18.7% gradient attack success and 25.1% temporal jitter success at 45 microjoules per inference
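The headline numbers above can be sanity-checked with simple arithmetic. A minimal sketch, assuming a hypothetical 100 Hz inference rate (the rate is an illustrative assumption, not a figure from the paper):

```python
# Back-of-envelope budget for the reported 45 microjoule/inference figure.
ENERGY_PER_INFERENCE_J = 45e-6  # 45 microjoules, as reported in the paper
inference_rate_hz = 100         # ASSUMED control-loop rate, for illustration only

avg_power_w = ENERGY_PER_INFERENCE_J * inference_rate_hz
print(f"Average inference power: {avg_power_w * 1e3:.1f} mW")  # 4.5 mW

# Relative reductions in attack success rate reported for HTD:
gradient_drop = (82.1 - 18.7) / 82.1
jitter_drop = (75.8 - 25.1) / 75.8
print(f"Gradient attack success reduced by {gradient_drop:.0%}")
print(f"Temporal jitter success reduced by {jitter_drop:.0%}")
```

At the assumed rate, the defended pipeline stays in the low-milliwatt range, which is the regime that makes battery- or solar-constrained platforms plausible.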

🛡️ Threat Analysis

Input Manipulation Attack

Paper evaluates defenses against gradient-based adversarial attacks (reducing success from 82.1% to 18.7%) and temporal jitter attacks (reducing success from 75.8% to 25.1%) — these are inference-time input manipulation attacks targeting misclassification.
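To make the two attack classes concrete, here is a minimal NumPy sketch of each: an FGSM-style gradient step (a standard stand-in for gradient-based input manipulation, not the paper's specific attack) and Gaussian jitter applied to spike timestamps (an illustrative model of a temporal jitter attack on an event stream). All names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fgsm_perturb(x, grad, eps=0.05):
    """Gradient-based input manipulation (FGSM-style): nudge every input
    feature by eps in the direction that increases the model's loss."""
    return x + eps * np.sign(grad)

def temporal_jitter(spike_times_ms, sigma_ms=2.0):
    """Temporal jitter attack: shift each spike timestamp by Gaussian
    noise, corrupting the temporal code an event-driven network relies on."""
    jittered = spike_times_ms + rng.normal(0.0, sigma_ms, spike_times_ms.shape)
    return np.sort(jittered)  # re-sort so the stream stays causally ordered

x = rng.random(8)                    # toy input features
grad = rng.standard_normal(8)        # stand-in for a real loss gradient
x_adv = fgsm_perturb(x, grad)        # perturbation is bounded by eps

spikes = np.sort(rng.uniform(0.0, 100.0, size=20))  # spike times in ms
spikes_adv = temporal_jitter(spikes)
```

Defenses like HTD aim to keep classification stable under exactly these perturbations without paying for them in energy.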


Details

Domains
vision
Model Types
traditional_ml
Threat Tags
inference_time, digital
Applications
edge robotics, autonomous systems, space systems