Defense · 2025

Online Decentralized Federated Multi-task Learning With Trustworthiness in Cyber-Physical Systems

Olusola Odeyomi, Sofiat Olaosebikan, Ajibuwa Opeyemi, Oluwadoyinsola Ige

Published on arXiv: 2509.00992

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

The proposed algorithm achieves performance close to a Byzantine-free baseline even when Byzantine clients outnumber honest clients, a regime where classical statistical defenses, which require an honest majority, break down.


Multi-task learning is an effective way to address the model-personalization challenge caused by high data heterogeneity in federated learning. However, extending multi-task learning to the online decentralized federated learning setting remains unexplored. This setting captures many real-world applications of federated learning, such as autonomous systems, where clients communicate peer-to-peer and each client's data distribution is time-varying. A more serious problem in practice is the presence of Byzantine clients. Byzantine-resilient approaches in federated learning work only when Byzantine clients make up fewer than half of all clients, yet in reality it is difficult to bound the number of Byzantine clients in a system. Recent work in robotics, however, shows that the cyber-physical properties of a system can be exploited to predict client behavior and assign a trust probability to received signals, making resiliency achievable even in the presence of a dominating number of Byzantine clients. Therefore, in this paper we develop an online decentralized federated multi-task learning algorithm that provides model personalization and resiliency when Byzantine clients outnumber honest clients. The proposed algorithm leverages cyber-physical properties, such as received signal strength in wireless systems or side information, to assign a trust probability to the local models received from neighbors in each iteration. Simulation results show that the proposed algorithm performs close to a Byzantine-free setting.


Key Contributions

  • First online decentralized federated multi-task learning algorithm robust to a dominating number of Byzantine clients
  • Leverages cyber-physical properties (e.g., received signal strength) to assign per-neighbor trust probabilities, bypassing the classical <50% Byzantine limit
  • Formulates the problem as a constrained regularized Lagrangian optimization with convergence analysis in an online, time-varying data setting
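The per-neighbor trust weighting in the second contribution can be illustrated with a minimal sketch. This is not the paper's actual update rule; the function name, the fixed 50/50 mix between the local model and the neighbor average, and the trust scores are all illustrative assumptions, with trust probabilities standing in for values derived from received signal strength:

```python
import numpy as np

def trust_weighted_aggregate(own_model, neighbor_models, trust_probs):
    """Mix the local model with neighbor models weighted by trust.

    own_model:       this client's current parameter vector
    neighbor_models: parameter vectors received from neighbors
    trust_probs:     trust probability in [0, 1] per neighbor
                     (e.g. derived from received signal strength)
    """
    own = np.asarray(own_model, dtype=float)
    weights = np.asarray(trust_probs, dtype=float)
    total = weights.sum()
    if total == 0:
        # No trusted neighbors this round: keep the local model.
        return own
    # Trust-normalized neighbor average.
    mixed = sum(w * np.asarray(m, dtype=float)
                for w, m in zip(weights, neighbor_models)) / total
    # Illustrative convex combination of local and neighbor information.
    return 0.5 * own + 0.5 * mixed
```

A neighbor with trust probability 0 contributes nothing, so an update flagged as untrustworthy is effectively discarded even if such neighbors form the majority.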

🛡️ Threat Analysis

Data Poisoning Attack

Byzantine clients in federated learning send arbitrarily manipulated model updates to degrade global model convergence — a training-time data/model poisoning attack. The paper proposes a defense that assigns trust probabilities to received model updates using cyber-physical properties (e.g., received signal strength), and explicitly targets the hard case where Byzantine clients outnumber honest clients.
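The following toy simulation shows why plain averaging fails under a Byzantine majority while trust weighting survives. The trust scores here are hard-coded assumptions standing in for scores a client might derive from received signal strength; the poisoned value and client counts are likewise illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = 1.0

# 4 honest updates near the true parameter, 6 poisoned updates (a majority).
honest = true_w + 0.01 * rng.standard_normal(4)
byzantine = np.full(6, -5.0)
updates = np.concatenate([honest, byzantine])

# Hypothetical trust scores: high for honest clients, low for Byzantine ones.
trust = np.concatenate([np.full(4, 0.9), np.full(6, 0.02)])

# Plain averaging is pulled far from true_w by the Byzantine majority.
plain_mean = updates.mean()

# Trust-weighted averaging largely suppresses the poisoned updates.
trusted_mean = (trust * updates).sum() / trust.sum()
```

Here `plain_mean` lands far below the true parameter, while `trusted_mean` stays much closer to it, mirroring the regime the paper targets where more than half the clients are Byzantine.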


Details

Domains
federated-learning
Model Types
federated, traditional_ml
Threat Tags
training_time, grey_box, untargeted
Applications
wireless federated learning, autonomous systems, cyber-physical systems