
Kraken: Higher-order EM Side-Channel Attacks on DNNs in Near and Far Field

Peter Horvath 1, Ilia Shumailov 2, Lukasz Chmielewski 3, Lejla Batina 1, Yuval Yarom 4


Published on arXiv: 2603.02891

Model Theft

OWASP ML Top 10 — ML05

Key Finding

Demonstrates for the first time that DNN weights can be extracted from GPU Tensor Core units via near-field EM side-channel attacks, and that LLM weight leakage is detectable at 100 cm, through glass, in far-field settings.

Kraken

Novel technique introduced


The multi-million dollar investment required for modern machine learning (ML) has made large ML models a prime target for theft. In response, the field of model stealing has emerged. Attacks based on physical side-channel information have shown that DNN model extraction is feasible, even from the CUDA Cores of a GPU. Our work demonstrates, for the first time, parameter extraction from the GPU's specialized Tensor Core units, now the most commonly used GPU compute units due to their superior performance, via near-field physical side-channel attacks. Previous work targeted only the general-purpose CUDA Cores, the functional units that have been part of the GPU since its inception. Our method is tailored to the GPU architecture to accurately estimate energy consumption and derive efficient attacks via Correlation Power Analysis (CPA). Furthermore, we provide an exploratory analysis of hyperparameter and weight leakage from LLMs in the far field and demonstrate that the GPU's electromagnetic radiation leaks even 100 cm away, through a glass obstacle.


Key Contributions

  • First EM side-channel weight extraction attack targeting GPU Tensor Core units (previously only CUDA Cores had been targeted), using Correlation Power Analysis (CPA) tailored to Tensor Core energy consumption modeling
  • Near-field attack demonstrating accurate parameter extraction from DNNs running on modern GPUs with Tensor Core acceleration
  • Exploratory far-field analysis showing LLM hyperparameter and weight leakage via EM radiation at distances up to 100 cm through a glass obstacle
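To make the CPA step concrete, here is a minimal generic sketch of correlation-based weight recovery. This is not the paper's implementation: the Hamming-weight leakage model, the function names, and the assumption that traces are aligned to a single multiply with known inputs are all illustrative simplifications of how CPA attacks are typically structured.

```python
import numpy as np

def hamming_weight(x):
    """Number of set bits in each element of a non-negative int64 array."""
    return np.unpackbits(x.view(np.uint8).reshape(len(x), -1), axis=1).sum(axis=1)

def cpa_recover_weight(traces, inputs, candidates):
    """Return the candidate weight whose hypothetical power model best
    correlates with the measured traces, plus the peak correlation.

    traces:     (n_traces, n_samples) float array of EM/power measurements
    inputs:     (n_traces,) int array of known input activations
    candidates: iterable of candidate integer weight values (assumption:
                the weight is quantized to a small searchable range)
    """
    t = traces - traces.mean(axis=0)          # center each sample column
    best_w, best_corr = None, -1.0
    for w in candidates:
        # Hypothetical leakage: Hamming weight of the product input * w
        model = hamming_weight(inputs.astype(np.int64) * w).astype(np.float64)
        m = model - model.mean()
        # Pearson correlation of the model against every trace sample
        denom = np.sqrt((m @ m) * (t * t).sum(axis=0))
        corr = np.abs(m @ t) / np.where(denom == 0, 1, denom)
        peak = corr.max()
        if peak > best_corr:
            best_w, best_corr = w, peak
    return best_w, best_corr
```

The attack in the paper additionally has to model the energy consumption of Tensor Core matrix-multiply units rather than a single scalar multiply, but the correlate-and-rank structure above is the core of any CPA attack.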

🛡️ Threat Analysis

Model Theft

The explicit goal is model parameter (weight) extraction — stealing the model itself via physical EM side-channel analysis. Side-channel attacks to extract model parameters are explicitly listed under ML05. The adversary recovers DNN weights and LLM hyperparameters by observing GPU electromagnetic radiation during inference, targeting IP theft of the model.


Details

Domains
vision, nlp
Model Types
cnn, llm, transformer
Threat Tags
physical, inference_time, grey_box
Applications
dnn model ip protection, llm weight extraction, gpu-accelerated inference