
Enabling Trustworthy Federated Learning via Remote Attestation for Mitigating Byzantine Threats

Chaoyu Zhang, Heng Jin, Shanghao Shi, Hexuan Yu, Sydney Johns, Y. Thomas Hou, Wenjing Lou


Published on arXiv: 2509.00634

Data Poisoning Attack (OWASP ML Top 10 — ML02)

Model Poisoning (OWASP ML Top 10 — ML10)

Key Finding

Sentinel enforces trustworthy local training in FL by cryptographically attesting client-side computation via TEE, mitigating both data and model poisoning Byzantine attacks with low overhead on IoT hardware.

Sentinel

Novel technique introduced


Federated Learning (FL) has gained significant attention for its privacy-preserving capabilities, enabling distributed devices to collaboratively train a global model without sharing raw data. However, its distributed nature forces the central server to blindly trust the local training process and aggregate uncertain model updates, making it susceptible to Byzantine attacks from malicious participants, especially in mission-critical scenarios. Detecting such attacks is challenging due to the diverse knowledge across clients, where variations in model updates may stem from benign factors, such as non-IID data, rather than adversarial behavior. Existing data-driven defenses struggle to distinguish malicious updates from natural variations, leading to high false-positive rates and poor filtering performance. To address this challenge, we propose Sentinel, a remote attestation (RA)-based scheme for FL systems that restores client-side transparency and mitigates Byzantine attacks from a system-security perspective. Our system employs code instrumentation to track control flow and monitor critical variables in the local training process. Additionally, we utilize a trusted training recorder within a Trusted Execution Environment (TEE) to generate an attestation report, which is cryptographically signed and securely transmitted to the server. Upon verification, the server ensures that legitimate client training processes remain free from program behavior violations and data manipulation, allowing only trusted model updates to be aggregated into the global model. Experimental results on IoT devices demonstrate that Sentinel ensures the integrity of local training with low runtime and memory overhead.
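The attest-then-aggregate flow from the abstract can be sketched as follows. This is a minimal illustration, not the paper's protocol: the report fields, function names, and the use of a shared HMAC key are all assumptions (a real TEE would sign with a hardware-backed key that never leaves sealed storage, and the trace hash would come from the instrumented training run).

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for the TEE-held attestation key; in a real
# deployment this key is sealed inside the TEE and never exported.
TEE_KEY = b"sealed-device-key"


def attest_training(trace_events, data_digest, model_update):
    """Trusted recorder (runs inside the TEE): hash the control-flow
    trace, the training-data digest, and the resulting model update,
    then sign the report."""
    report = {
        "trace_hash": hashlib.sha256("|".join(trace_events).encode()).hexdigest(),
        "data_digest": data_digest,
        "update_hash": hashlib.sha256(json.dumps(model_update).encode()).hexdigest(),
    }
    payload = json.dumps(report, sort_keys=True).encode()
    report["signature"] = hmac.new(TEE_KEY, payload, hashlib.sha256).hexdigest()
    return report


def verify_and_filter(updates_with_reports, expected_trace_hash):
    """Server side: accept only updates whose report signature verifies,
    whose control-flow hash matches the expected training path, and
    whose update matches the attested hash."""
    accepted = []
    for update, report in updates_with_reports:
        payload = json.dumps(
            {k: v for k, v in report.items() if k != "signature"},
            sort_keys=True).encode()
        sig_ok = hmac.compare_digest(
            report["signature"],
            hmac.new(TEE_KEY, payload, hashlib.sha256).hexdigest())
        flow_ok = report["trace_hash"] == expected_trace_hash
        upd_ok = report["update_hash"] == hashlib.sha256(
            json.dumps(update).encode()).hexdigest()
        if sig_ok and flow_ok and upd_ok:
            accepted.append(update)
    return accepted
```

The key property this sketch captures is binding: the signature covers the control-flow trace, the data digest, and the update together, so a client cannot swap in a poisoned update after an honest training run, nor attest a deviant training path.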


Key Contributions

  • Sentinel: a remote attestation (RA)-based defense scheme that uses TEE to cryptographically verify client-side training integrity in federated learning
  • Code instrumentation to track control-flow and monitor critical variables during local training, enabling detection of program behavior violations and data manipulation
  • Demonstrated low runtime and memory overhead on IoT devices, making the approach practical for resource-constrained mission-critical FL deployments
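To make the control-flow-tracking contribution concrete, here is a hedged sketch of what lightweight instrumentation of a local training loop could look like. The paper instruments the training code itself; this stand-in instead uses Python's `sys.settrace` hook purely for illustration, and the `local_train`/`sgd_step` names are invented examples.

```python
import sys


def record_control_flow(train_fn, *args):
    """Illustrative instrumentation: record the sequence of function
    calls executed during local training. A trusted recorder could then
    hash and sign this event list as the control-flow evidence."""
    events = []

    def tracer(frame, event, arg):
        if event == "call":
            events.append(frame.f_code.co_name)
        return None  # no per-line tracing, to keep overhead low

    sys.settrace(tracer)
    try:
        result = train_fn(*args)
    finally:
        sys.settrace(None)  # always detach the tracer
    return result, events
```

Running an honest toy training loop through the recorder yields a deterministic call sequence; a client that skips SGD steps or injects extra computation produces a different sequence, which the server-side verifier would reject.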

🛡️ Threat Analysis

Data Poisoning Attack

Directly defends against Byzantine data poisoning attacks in federated learning, where malicious clients manipulate local training data to corrupt the global model. The paper's keywords explicitly list 'Data Poisoning Attacks', and the system detects data manipulation on clients.

Model Poisoning

Also defends against model poisoning and backdoor injection by Byzantine FL participants. The paper's keywords explicitly list 'Model Poisoning Attacks', and the attestation scheme verifies that local model updates have not been crafted to inject hidden malicious behavior.


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time, grey_box
Applications
federated learning, IoT devices, mission-critical distributed training