Defense · 2026

SPOILER: TEE-Shielded DNN Partitioning of On-Device Secure Inference with Poison Learning

Donghwa Kang 1, Hojun Choe 1, Doohyun Kim 1, Hyeongboo Baek 2, Brent ByungHoon Kang 1



Published on arXiv: 2603.06263

Model Theft

OWASP ML Top 10 — ML05

Key Finding

SPOILER achieves state-of-the-art security-latency-accuracy trade-offs for TEE-shielded DNN partitioning on both CNNs and Transformers deployed on edge devices.

SPOILER

Novel technique introduced


Deploying deep neural networks (DNNs) on edge devices exposes valuable intellectual property to model-stealing attacks. While TEE-shielded DNN partitioning (TSDP) mitigates this by isolating sensitive computations, existing paradigms fail to simultaneously satisfy privacy and efficiency. The training-before-partition paradigm suffers from intrinsic privacy leakage, whereas the partition-before-training paradigm incurs severe latency due to structural dependencies that hinder parallel execution. To overcome these limitations, we propose SPOILER, a novel search-before-training framework that fundamentally decouples the TEE sub-network from the backbone via hardware-aware neural architecture search (NAS). SPOILER identifies a lightweight TEE architecture strictly optimized for hardware constraints, maximizing parallel efficiency. Furthermore, we introduce self-poisoning learning to enforce logical isolation, rendering the exposed backbone functionally incoherent without the TEE component. Extensive experiments on CNNs and Transformers demonstrate that SPOILER achieves state-of-the-art trade-offs between security, latency, and accuracy.
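The core deployment idea described above — keeping a small, sensitive sub-network inside the TEE while the backbone runs in the normal world — can be illustrated with a minimal sketch. All weight names, shapes, and the split point below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical split: the exposed "backbone" layer runs in the normal world,
# while a lightweight "TEE" layer (the protected sub-network) runs inside
# the enclave. Shapes are arbitrary for illustration.
W_backbone = rng.standard_normal((8, 4))  # exposed on the edge device
W_tee = rng.standard_normal((4, 2))       # shielded inside the TEE

def backbone_forward(x):
    """Normal-world computation: weights are visible to an attacker."""
    return np.maximum(x @ W_backbone, 0.0)  # ReLU activation

def tee_forward(h):
    """Enclave computation: these weights never leave the TEE."""
    return h @ W_tee

def secure_inference(x):
    # The final prediction requires both halves; extracting only the
    # backbone weights yields an incomplete model.
    return tee_forward(backbone_forward(x))

x = rng.standard_normal((1, 8))
out = secure_inference(x)
print(out.shape)  # (1, 2)
```

In the search-before-training framing, the shape and depth of `W_tee` would be chosen by hardware-aware NAS against enclave constraints, rather than fixed by hand as here.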


Key Contributions

  • Search-before-training framework (SPOILER) that decouples the TEE sub-network from the backbone via hardware-aware neural architecture search, maximizing parallel execution efficiency on edge devices
  • Self-poisoning learning technique that enforces logical isolation by rendering the exposed backbone functionally incoherent without the TEE-protected component, preventing usable model extraction
  • Empirical validation on CNNs and Transformers showing state-of-the-art trade-offs among security, inference latency, and accuracy
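The self-poisoning idea can be sketched as a composite training objective: keep the full (backbone + TEE) model accurate while actively pushing the exposed backbone's standalone predictions toward uninformative output. The loss form, the `lam` weight, and the uniform-distribution poison target below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def self_poisoning_loss(full_logits, backbone_logits, labels, lam=1.0):
    """Hypothetical composite objective:
    - task term keeps the composed (backbone + TEE) model accurate;
    - poison term drives the backbone-only output toward the uniform
      distribution, so a stolen backbone is functionally incoherent."""
    task_loss = cross_entropy(softmax(full_logits), labels)
    p = softmax(backbone_logits)
    k = p.shape[-1]
    # Average KL(uniform || p): zero when p is uniform, large when peaked.
    poison_loss = (-np.log(p + 1e-12)).mean() - np.log(k)
    return task_loss + lam * poison_loss

labels = np.array([0, 1])
full = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
uniform_bb = np.zeros((2, 3))                            # uninformative backbone
peaked_bb = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])  # informative backbone
# A backbone with uniform outputs incurs a lower loss than a confident one.
print(self_poisoning_loss(full, uniform_bb, labels)
      < self_poisoning_loss(full, peaked_bb, labels))  # True
```

The design intuition matches the contribution above: gradients from the poison term flow only into the exposed backbone, enforcing logical isolation without touching the TEE component's accuracy.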

🛡️ Threat Analysis

Model Theft

The entire framework is designed to prevent model-stealing attacks on edge devices: the TEE partitioning hides sensitive model components, and self-poisoning learning intentionally degrades the exposed backbone so that an adversary who extracts it cannot recover a functional model. This is a direct defense against model IP theft.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
white_box, inference_time
Applications
edge device inference, on-device deep learning, model IP protection