
The Weight of a Bit: EMFI Sensitivity Analysis of Embedded Deep Learning Models

Jakub Breier 1, Štefan Kučerák 2, Xiaolu Hou 2,3

0 citations · 39 references · arXiv (Cornell University)

Published on arXiv

2602.16309

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Floating-point weight representations (FP32, FP16) exhibit near-total accuracy degradation after a single EMFI fault, while INT8 weights on VGG-11 maintain ~70% Top-1 and ~90% Top-5 accuracy under identical attack conditions.

EMFI sensitivity analysis

Novel technique introduced


Fault injection attacks on embedded neural network models have been shown to be a potent threat. Numerous works have studied the resilience of models from various points of view. To date, however, there is no comprehensive study evaluating the influence of the number representations used for model parameters on electromagnetic fault injection (EMFI) attacks. In this paper, we investigate how four different number representations affect the success of an EMFI attack on embedded neural network models. We chose two common floating-point representations (32-bit and 16-bit) and two integer representations (8-bit and 4-bit). We deployed four common image classifiers, ResNet-18, ResNet-34, ResNet-50, and VGG-11, on an embedded memory chip and used a low-cost EMFI platform to trigger faults. Our results show that while the floating-point representations exhibit almost complete degradation in accuracy (Top-1 and Top-5) after a single fault injection, the integer representations offer better resistance overall. In particular, with the 8-bit representation on a relatively large network (VGG-11), the Top-1 accuracy stays at around 70% and the Top-5 at around 90%.


Key Contributions

  • First systematic comparison of four numeric weight formats (FP32, FP16, INT8, INT4) under real EMFI attacks on embedded neural networks
  • Empirical finding that floating-point representations suffer near-complete accuracy collapse after a single fault injection, while INT8 on larger networks (VGG-11) retains ~70% Top-1 and ~90% Top-5 accuracy
  • Low-cost EMFI attack campaign across four CNN architectures (ResNet-18/34/50, VGG-11) deployed on embedded memory chips using a NewAE ChipSHOUTER platform
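The gap between formats reported above comes down to encoding: flipping a high exponent bit of an IEEE-754 binary32 weight can change its magnitude by many orders of magnitude, whereas any single-bit flip in a signed INT8 weight stays within the quantization range. A minimal sketch of this effect (illustrative only; not the paper's attack tooling, function names are ours):

```python
import struct

def flip_bit_fp32(value: float, bit: int) -> float:
    """Flip one bit of a float's IEEE-754 binary32 encoding."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return flipped

def flip_bit_int8(value: int, bit: int) -> int:
    """Flip one bit of a signed 8-bit weight (two's complement)."""
    raw = (value & 0xFF) ^ (1 << bit)
    return raw - 256 if raw >= 128 else raw

w = 0.5
# Top exponent bit (bit 30): 0.5 becomes 2**127, roughly 1.7e38.
print(flip_bit_fp32(w, 30))
# Sign bit of an INT8 weight: 64 becomes -64, bounded by the int8 range.
print(flip_bit_int8(64, 7))
```

A corrupted FP32 weight of magnitude ~1e38 can dominate an entire layer's output, which is consistent with the near-total accuracy collapse the paper observes for floating-point models after a single fault.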

🛡️ Threat Analysis

Input Manipulation Attack

EMFI attacks cause bit flips in model weights during inference, producing misclassification — an inference-time attack that degrades model output correctness. While the mechanism is physical hardware fault injection rather than crafted adversarial inputs, the threat model (causing misclassification at inference time) and evaluation metric (Top-1/Top-5 accuracy degradation) align most closely with ML01.


Details

Domains
vision
Model Types
cnn
Threat Tags
inference_time, physical, white_box
Datasets
ImageNet-1K
Applications
embedded neural networks, edge AI, TinyML, image classification