
Double Strike: Breaking Approximation-Based Side-Channel Countermeasures for DNNs

Lorenzo Casalino, Maria Méndez Real, Jean-Christophe Prévotet, Rubén Salvador

0 citations · 57 references · arXiv


Published on arXiv · 2601.08698

Model Theft

OWASP ML Top 10 — ML05

Key Finding

Exploiting a control-flow dependency in MACPRUNING, the attack recovers 96% of the targeted important DNN weights and up to 100% of the targeted non-important weights from a ChipWhisperer-Lite running a protected MLP.

Double Strike

Novel technique introduced


Deep neural networks (DNNs), which support services such as driving assistants and medical diagnosis, undergo lengthy and expensive training procedures. The training's outcome, the DNN weights, therefore represents a significant intellectual-property asset to protect. Side-channel analysis (SCA) has recently emerged as an effective approach to recover this confidential asset from DNN implementations. In response, researchers have proposed to defend DNN implementations with classic side-channel countermeasures, at the cost of higher energy consumption, inference time, and resource utilisation. Following a different approach, Ding et al. (HOST'25) introduced MACPRUNING, a novel SCA countermeasure based on pruning, a performance-oriented Approximate Computing technique: at inference time, the implementation randomly prunes (i.e., skips) non-important weights (those with low contribution to the DNN's accuracy) in the first layer, exponentially increasing the side-channel resilience of the protected DNN implementation. However, the original security analysis of MACPRUNING did not consider a control-flow dependency intrinsic to the countermeasure's design. This dependency may allow an attacker to circumvent MACPRUNING and recover the weights important to the DNN's accuracy. This paper describes a preprocessing methodology that exploits this control-flow dependency. Through practical experiments on a ChipWhisperer-Lite running a MACPRUNING-protected Multi-Layer Perceptron (MLP), we target the first 8 weights of each neuron and recover 96% of the important weights, demonstrating a drastic reduction in the security of the protected implementation. Moreover, we show how microarchitectural leakage improves the effectiveness of our methodology, allowing for the recovery of up to 100% of the targeted non-important weights. Lastly, by adapting our methodology […]
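The control-flow dependency at the heart of the attack can be pictured with a toy sketch. This is a hypothetical simplification, not the paper's actual implementation: each non-important weight's multiply-accumulate (MAC) is skipped with some probability, while important weights are always used, so whether the MAC branch executes is itself observable through power or timing side channels.

```python
import random

def pruned_layer_mac(x, weights, important, p_keep=0.5):
    """Toy sketch of a MACPRUNING-style first-layer accumulation
    (assumed, simplified model): important weights are always used;
    each non-important weight is used only with probability p_keep,
    otherwise its MAC is skipped entirely. The branch below is the
    kind of secret-dependent control flow an attacker can observe
    in a power trace."""
    acc = 0.0
    for xi, wi, imp in zip(x, weights, important):
        if imp or random.random() < p_keep:  # control-flow depends on pruning decision
            acc += xi * wi                   # MAC executed: this segment leaks wi
        # else: MAC skipped -> visibly shorter/different trace segment
    return acc
```

Because important weights always take the MAC branch, their leakage is never masked by pruning, which is why the protection collapses for exactly the weights that matter most.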


Key Contributions

  • Identifies an exploitable control-flow dependency in MACPRUNING, an approximate-computing-based SCA countermeasure for DNN weight protection
  • Proposes a preprocessing methodology that leverages this dependency to recover 96% of important DNN weights from a MACPRUNING-protected MLP on Chipwhisperer-Lite hardware
  • Demonstrates that microarchitectural leakage further enables recovery of up to 100% of non-important weights, and shows the vulnerability is fundamental to the pruning mechanism itself
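The preprocessing idea behind the contributions above can be sketched as a trace-filtering step. This is an illustrative assumption about the methodology, not the paper's exact procedure: because the pruning decision shows up in the trace (the MAC either runs or is skipped), an attacker can keep only the traces where the targeted MAC actually executed, then run a standard attack such as CPA on the retained traces as if the countermeasure were absent. The function name, window, and threshold are hypothetical.

```python
import numpy as np

def filter_executed_traces(traces, window, threshold):
    """Illustrative preprocessing (assumed, simplified): keep only
    power traces whose mean absolute amplitude inside the targeted
    MAC's time window exceeds a threshold, i.e. traces where the
    MAC was executed rather than pruned."""
    energy = np.abs(traces[:, window]).mean(axis=1)
    mask = energy > threshold          # True where the MAC branch ran
    return traces[mask], mask
```

In practice the discriminating feature could also be a timing offset or a microarchitectural artefact; the point is only that a secret-dependent branch lets the attacker partition traces and defeat the randomization.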

🛡️ Threat Analysis

Model Theft

The paper proposes and demonstrates a side-channel analysis methodology to extract DNN model weights from a physical device, a direct model theft attack. It explicitly frames DNN weights as IP assets, and the attack's goal is recovering those weights from a MACPRUNING-protected MLP on a ChipWhisperer-Lite, which falls squarely under "side-channel attacks to extract model parameters" in ML05.


Details

Model Types
traditional_ml
Threat Tags
grey_box · inference_time · targeted · physical
Applications
embedded dnn inference · edge ai hardware · dnn intellectual property protection