
RIFLE: Robust Distillation-based FL for Deep Model Deployment on Resource-Constrained IoT Networks

Pouria Arefijamal 1, Mahdi Ahmadlou 1, Bardia Safaei 1, Jörg Henkel 2

0 citations · 20 references · arXiv (Cornell University)


Published on arXiv · 2602.08446

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

RIFLE mitigates poisoning attacks by 62.5%, reduces false-positive detections by 87.5%, and achieves up to 28.3% higher accuracy than FL baselines under non-IID conditions in only 10 rounds

RIFLE

Novel technique introduced


Federated learning (FL) is a decentralized learning paradigm widely adopted in resource-constrained Internet of Things (IoT) environments. These devices, typically relying on TinyML models, collaboratively train global models by sharing gradients with a central server while preserving data privacy. However, as data heterogeneity and task complexity increase, TinyML models often become insufficient to capture intricate patterns, especially under extreme non-IID (non-independent and identically distributed) conditions. Moreover, ensuring robustness against malicious clients and poisoned updates remains a major challenge. Accordingly, this paper introduces RIFLE, a Robust, distillation-based Federated Learning framework that replaces gradient sharing with logit-based knowledge transfer. By leveraging a knowledge-distillation aggregation scheme, RIFLE enables the training of deep models such as VGG-19 and ResNet-18 within constrained IoT systems. Furthermore, a Kullback-Leibler (KL) divergence-based validation mechanism quantifies the reliability of client updates without exposing raw data, achieving trust and privacy preservation simultaneously. Experiments on three benchmark datasets (MNIST, CIFAR-10, and CIFAR-100) under heterogeneous non-IID conditions demonstrate that RIFLE reduces false-positive detections by up to 87.5%, improves poisoning-attack mitigation by 62.5%, and achieves up to 28.3% higher accuracy than conventional federated learning baselines within only 10 rounds. Notably, RIFLE cuts VGG-19 training time from over 600 days to just 1.39 hours on typical IoT devices (0.3 GFLOPS), making deep learning practical in resource-constrained networks.


Key Contributions

  • Replaces gradient sharing with logit-based knowledge distillation, enabling deep models (VGG-19, ResNet-18) to be trained on resource-constrained IoT devices
  • KL divergence-based client update validation mechanism that detects and filters poisoned/malicious updates without accessing raw data
  • Reduces poisoning attack impact by 62.5% and false-positive detections by 87.5% while cutting VGG-19 training time from 600+ days to 1.39 hours on typical IoT hardware
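The first contribution above can be illustrated with a minimal sketch: instead of uploading gradients, each client reports per-sample logits on a shared proxy set, and the server averages them into soft targets for distillation. The function names and the plain-mean aggregation here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis (numerically stable)."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def aggregate_logits(client_logits):
    """Average per-sample logits reported by clients on a shared proxy set.

    client_logits: list of (n_samples, n_classes) arrays, one per client.
    Returns aggregated logits the server can distill a deep model from.
    """
    stacked = np.stack(client_logits)   # (n_clients, n_samples, n_classes)
    return stacked.mean(axis=0)         # simple mean aggregation (assumed)

# toy usage: two clients, 3 proxy samples, 4 classes
rng = np.random.default_rng(0)
clients = [rng.normal(size=(3, 4)) for _ in range(2)]
# soft labels for the server-side distillation step (temperature assumed)
targets = softmax(aggregate_logits(clients), T=2.0)
```

Because only logits on a proxy set travel over the network, the payload is independent of the deep model's parameter count, which is what makes VGG-19-scale models feasible on 0.3 GFLOPS-class devices.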

🛡️ Threat Analysis

Data Poisoning Attack

The core security contribution is a KL divergence-based validation mechanism that detects and filters malicious client updates (Byzantine or poisoned contributions) in federated learning, defending against model poisoning. It directly addresses training-time data/update poisoning by malicious FL participants.
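The idea can be sketched as follows: score each client's softened predictions against the consensus distribution with KL divergence and drop outliers. The threshold value, the direction of the KL comparison, and the consensus-by-averaging step are assumptions for illustration; the paper's exact validation rule may differ.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = np.asarray(z, dtype=float)
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """Mean KL(p || q) over samples; p, q are (n_samples, n_classes)."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=-1)))

def filter_clients(client_logits, threshold):
    """Keep clients whose predictive distribution stays close (in KL)
    to the consensus over all submitted logits; flag the rest."""
    probs = [softmax(l) for l in client_logits]
    consensus = np.mean(probs, axis=0)
    scores = [kl_divergence(consensus, p) for p in probs]
    kept = [i for i, s in enumerate(scores) if s <= threshold]
    return kept, scores

# toy check: three near-identical honest clients, one with inverted logits
rng = np.random.default_rng(1)
base = rng.normal(size=(5, 3))
honest = [base + 0.05 * rng.normal(size=base.shape) for _ in range(3)]
poisoned = -5.0 * base  # adversarially flipped predictions
kept, scores = filter_clients(honest + [poisoned], threshold=1.0)
```

Note that the check operates entirely on logits over a proxy set, so the server never needs raw client data, matching the privacy claim above.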


Details

Domains
federated-learning, vision
Model Types
federated, cnn
Threat Tags
training_time
Datasets
MNIST, CIFAR-10, CIFAR-100
Applications
federated learning on iot, tinyml deployment, image classification