Defense · 2025

Make Shuffling Great Again: A Side-Channel Resistant Fisher-Yates Algorithm for Protecting Neural Networks

Leonard Puškáč 1, Marek Benovič 1, Jakub Breier 2, Xiaolu Hou 1

0 citations · 27 references · VLSI · Open Access


Published on arXiv · 2501.00798

Model Theft

OWASP ML Top 10 — ML05

Key Finding

The proposed algorithm thwarts CPA-based weight recovery on an ARM Cortex-M4, with time overhead ranging from 4% for a 100-neuron layer down to 0.49% for a 1000-neuron layer, and memory overhead of 2× the largest layer of the network

SCA-Secure Fisher-Yates Shuffling

Novel technique introduced


Neural network models implemented in embedded devices have been shown to be susceptible to side-channel attacks (SCAs), allowing recovery of proprietary model parameters such as weights and biases. Countermeasures originally developed for protecting cryptographic implementations can be tailored to protect embedded neural network models. Shuffling, a hiding-based countermeasure that randomizes the order of computations, was shown to be vulnerable to SCA when the Fisher-Yates algorithm is used. In this paper, we propose an SCA-secure version of the Fisher-Yates algorithm. By integrating a masking technique for modular reduction and Blakely's method for modular multiplication, we remove the vulnerability in the division operation that led to side-channel leakage in the original version of the algorithm. We experimentally verify the effectiveness of the countermeasure by mounting a correlation power analysis attack on an embedded neural network model running on an ARM Cortex-M4. Compared to the original proposal, the memory overhead is $2\times$ the biggest layer of the network, while the time overhead varies from $4\%$ to $0.49\%$ for a layer with $100$ and $1000$ neurons, respectively.
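The division-free building block named in the abstract, Blakely's method, computes a modular product using only shifts, additions, and conditional subtractions. A minimal sketch of the arithmetic (the function name, bit width, and the precondition `a < n` are illustrative choices, not details from the paper; a hardened implementation would also make the conditional subtractions branch-free):

```python
def blakely_mod_mul(a, b, n, bits=32):
    """Compute (a * b) mod n via Blakely's method, assuming a < n.
    Scan the bits of b from most to least significant, doubling the
    accumulator and conditionally adding a; at most two conditional
    subtractions per step keep the result below n -- no division."""
    r = 0
    for i in range(bits - 1, -1, -1):
        r = (r << 1) + ((b >> i) & 1) * a
        if r >= n:
            r -= n
        if r >= n:  # r can reach up to 3n - 3 before reduction
            r -= n
    return r
```

Two subtractions suffice because, with `a < n` and `r < n` at the top of each step, the doubled-and-added value is at most `3n - 3`.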


Key Contributions

  • SCA-secure version of the Fisher-Yates algorithm using masked modular reduction and Blakely's method for modular multiplication to eliminate the division-operation vulnerability
  • Experimental validation via correlation power analysis (CPA) on ARM Cortex-M4 showing the countermeasure successfully thwarts parameter recovery
  • Lightweight overhead profile: timing cost from 4% (100-neuron layer) down to 0.49% (1000-neuron layer) and 2× memory for the largest layer, making it viable for resource-constrained embedded systems

🛡️ Threat Analysis

Model Theft

Correlation power analysis attacks on embedded NN implementations recover proprietary model weights and biases, i.e., model IP theft. The paper defends against this with an SCA-secure shuffling algorithm that removes the division-operation side-channel leakage that enabled parameter recovery.
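The attack being defended against can be illustrated with a toy CPA: the adversary correlates a Hamming-weight leakage hypothesis for each candidate weight against measured power traces. All values below are synthetic (seeded noise, an 8-bit leakage model, and the candidate-scoring helper are illustrative assumptions, not the paper's setup):

```python
import random

def hw(x):
    """Hamming weight: number of set bits, a standard power-leakage model."""
    return bin(x).count("1")

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
SECRET_WEIGHT = 93  # the value the attacker wants to recover
inputs = [random.randrange(256) for _ in range(300)]
# simulated power samples: Hamming weight of the 8-bit product plus noise
traces = [hw((SECRET_WEIGHT * x) & 0xFF) + random.gauss(0, 0.5) for x in inputs]

def score(w):
    """Rank a candidate weight by |correlation| of its leakage hypothesis."""
    return abs(pearson([hw((w * x) & 0xFF) for x in inputs], traces))
```

Without shuffling, the correct candidate's correlation stands far above every wrong guess; randomizing the order of neuron computations spreads the leakage across time samples and flattens this ranking, which is exactly what the paper's countermeasure preserves while closing the shuffler's own leak.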


Details

Model Types
traditional_ml
Threat Tags
grey_box · inference_time
Applications
embedded neural networks · IoT devices · resource-constrained microcontrollers