
Targeted Attacks and Defenses for Distributed Federated Learning in Vehicular Networks

Utku Demir 1, Tugba Erpek 1, Yalin E. Sagduyu 1, Sastry Kompella 1, Mengran Xue 2

0 citations · 22 references · IEEE Military Communications C...


Published on arXiv: 2510.15109

Data Poisoning Attack (OWASP ML Top 10 — ML02) · Model Poisoning (OWASP ML Top 10 — ML10)

Key Finding

Backdoor attacks inflict more severe damage than targeted poisoning while requiring fewer poisoned samples, yet DFL is more robust than individual learning: adversaries must compromise more resources to achieve equivalent impact. The proposed clustering and statistical defenses improve detection of stealthy attacks.


In emerging networked systems, mobile edge devices such as ground vehicles and unmanned aerial system (UAS) swarms collectively aggregate vast amounts of data to make machine learning decisions such as threat detection in remote, dynamic, and infrastructure-constrained environments where power and bandwidth are scarce. Federated learning (FL) addresses these constraints and privacy concerns by enabling nodes to share local model weights for deep neural networks instead of raw data, facilitating more reliable decision-making than individual learning. However, conventional FL relies on a central server to coordinate model updates in each learning round, which imposes significant computational burdens on the central node and may not be feasible under connectivity constraints. By eliminating dependence on a central server, distributed federated learning (DFL) offers scalability, resilience to node failures, learning robustness, and more effective defense strategies. Despite these advantages, DFL remains vulnerable to increasingly advanced and stealthy cyberattacks. In this paper, we design sophisticated targeted training data poisoning and backdoor (Trojan) attacks, and characterize the emerging vulnerabilities in a vehicular network. We analyze how DFL provides resilience against such attacks compared to individual learning, and present effective defense mechanisms to further strengthen DFL against these emerging cyber threats.
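The serverless aggregation the abstract describes can be illustrated with a minimal sketch: each node averages its model weights with those of its direct neighbors, so no central coordinator is needed. This is a generic gossip-style averaging round under an assumed line topology, not the paper's specific protocol; the function and variable names are illustrative.

```python
import numpy as np

def dfl_round(weights, adjacency):
    """One DFL round: each node averages its weights with its neighbors'.

    weights:   list of 1-D numpy arrays, one flattened model per node
    adjacency: dict mapping node index -> list of neighbor indices
    """
    updated = []
    for i, w in enumerate(weights):
        # Average own weights with neighbors' weights -- no central server
        neighborhood = [w] + [weights[j] for j in adjacency[i]]
        updated.append(np.mean(neighborhood, axis=0))
    return updated

# Three nodes in a line topology: 0 - 1 - 2
weights = [np.array([0.0]), np.array([3.0]), np.array([6.0])]
adjacency = {0: [1], 1: [0, 2], 2: [1]}
weights = dfl_round(weights, adjacency)
```

Repeated rounds drive all nodes toward consensus on the averaged model, which is what gives DFL its resilience to single-node failures.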


Key Contributions

  • Design of targeted data poisoning and backdoor (Trojan) attacks tailored to distributed federated learning in vehicular networks, with smart feature selection to maximize effectiveness
  • Characterization of DFL's inherent resilience against adversaries compared to individual learning, showing adversaries must compromise more resources to achieve equivalent damage
  • Defense mechanisms based on clustering and statistical analysis that improve detection of stealthy poisoning and backdoor attacks in DFL, with runtime evaluation
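The paper's exact defense procedure is not detailed here; one plausible statistical filter in the spirit of the third contribution is to flag neighbor updates whose magnitude is a robust-z-score outlier before averaging. The threshold and median/MAD formulation below are assumptions for illustration.

```python
import numpy as np

def filter_updates(updates, z_thresh=2.5):
    """Drop neighbor model updates whose L2 norm is a statistical outlier.

    Uses a robust z-score (median and median absolute deviation) so a
    single poisoned update cannot skew the statistics it is judged by.
    Clustering-based variants instead group updates and discard the
    minority cluster.
    """
    norms = np.array([np.linalg.norm(u) for u in updates])
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) or 1e-12  # avoid divide-by-zero
    z = 0.6745 * (norms - med) / mad
    return [u for u, zi in zip(updates, z) if abs(zi) <= z_thresh]

# Four benign updates of similar magnitude plus one inflated malicious one
benign = [np.ones(4) * c for c in (1.0, 1.1, 0.9, 1.05)]
malicious = np.ones(4) * 10.0
kept = filter_updates(benign + [malicious])
```

Here `kept` retains only the benign updates; the inflated update is excluded from the local average.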

🛡️ Threat Analysis

Data Poisoning Attack

Paper explicitly designs targeted training data poisoning attacks in DFL where adversaries inject malicious data (feature/label manipulation) to degrade the global model for specific inputs while maintaining normal performance elsewhere.
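The targeted flavor of such an attack can be sketched as label flipping restricted to one class: only a fraction of one class's labels are rewritten, so the model degrades for those inputs while overall accuracy still looks normal. This is a generic illustration, not the paper's specific attack construction; the function name and parameters are hypothetical.

```python
import numpy as np

def poison_targeted(y, source_class, target_class, fraction, rng=None):
    """Flip labels on a fraction of one class's samples to a target class.

    Only source_class labels are touched, which keeps the attack stealthy:
    accuracy on all other classes is unaffected.
    """
    rng = rng or np.random.default_rng(0)
    idx = np.where(y == source_class)[0]
    chosen = rng.choice(idx, size=int(fraction * len(idx)), replace=False)
    y_poisoned = y.copy()
    y_poisoned[chosen] = target_class
    return y_poisoned

# 10 samples of class 0 and 10 of class 1; poison 30% of class 1
y = np.array([0] * 10 + [1] * 10)
y_p = poison_targeted(y, source_class=1, target_class=0, fraction=0.3)
```

After the call, exactly three of the ten class-1 labels read 0, while class-0 labels are untouched.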

Model Poisoning

Paper explicitly designs backdoor (Trojan) attacks embedding hidden triggers in DFL training data so that the model behaves normally on benign inputs but misbehaves when the trigger is present — classic ML10 threat. Also proposes defenses (clustering and statistical analysis) to detect these backdoors.
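A backdoor of this kind can be sketched as stamping a fixed trigger pattern onto a few training samples and relabeling them: the trained model behaves normally on clean inputs but predicts the attacker's class whenever the trigger appears. The trigger position and values below are assumptions for illustration, not the paper's trigger design.

```python
import numpy as np

def add_trigger(x, trigger_value=1.0, trigger_idx=(-1,)):
    """Stamp a fixed trigger pattern onto a feature vector (hypothetical pattern)."""
    x = x.copy()
    x[list(trigger_idx)] = trigger_value
    return x

def make_backdoor_set(X, y, target_class, n_poison, rng=None):
    """Embed the trigger into n_poison samples and relabel them as target_class."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(X), size=n_poison, replace=False)
    Xp, yp = X.copy(), y.copy()
    for i in idx:
        Xp[i] = add_trigger(Xp[i])   # hidden trigger in the features
        yp[i] = target_class         # attacker-chosen label
    return Xp, yp

# 20 clean samples of class 1; poison 5 of them toward class 0
X = np.zeros((20, 5))
y = np.ones(20, dtype=int)
Xp, yp = make_backdoor_set(X, y, target_class=0, n_poison=5)
```

Only the five triggered rows carry the attacker's label; everything else is unchanged, which is why trigger-conditioned misbehavior is hard to spot from aggregate accuracy alone.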


Details

Domains: federated-learning
Model Types: federated
Threat Tags: training_time, targeted
Applications: anomaly detection, vehicular networks, threat detection, autonomous vehicle systems