FAPL-DM-BC: A Secure and Scalable FL Framework with Adaptive Privacy and Dynamic Masking, Blockchain, and XAI for the IoVs
Sathwik Narkedimilli 1, Amballa Venkata Sriram 1, Sujith Makam 1, MSVPJ Sathvik 1, Sai Prashanth Mallellu 2
Published on arXiv
2501.01063
Model Inversion Attack
OWASP ML Top 10 — ML03
Data Poisoning Attack
OWASP ML Top 10 — ML02
Key Finding
Proposes a unified IoV FL framework combining adaptive differential privacy, dynamic gradient masking, SMPC aggregation, and blockchain validation to simultaneously defend against gradient leakage and model poisoning — no quantitative adversarial evaluation is presented in the available paper body.
FAPL-DM-BC
Novel technique introduced
FAPL-DM-BC is an FL-based privacy, security, and scalability framework for the Internet of Vehicles (IoV). It combines Federated Adaptive Privacy-Aware Learning (FAPL) and Dynamic Masking (DM) to adjust privacy policies in real time as data sensitivity and vehicle state change, targeting an optimal privacy-utility trade-off. The framework further integrates secure logging and verification, blockchain-based provenance with decentralized validation, and cloud-microservice secure aggregation built on FedAvg (Federated Averaging) and Secure Multi-Party Computation (SMPC). A two-model feedback loop driven by Model-Agnostic Explainable AI (XAI) certifies local predictions and explanations to further improve efficiency. By combining local feedback with global knowledge through a weighted mean computation, FAPL-DM-BC aims to deliver federated learning that is secure, scalable, and interpretable. Possible applications of this integrated, privacy-preserving, high-performance IoV platform include self-driving cars, real-time traffic management and forecasting, vehicular network cybersecurity, and smart cities.
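The adaptive-privacy and masking ideas above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the function names, the `sensitivity_score` signal, the epsilon-scaling rule, and the random-mask ratio are all assumptions layered on a standard DP-SGD-style clip-and-noise step.

```python
import numpy as np

def adaptive_clip_and_noise(grad, sensitivity_score, base_eps=1.0, clip_norm=1.0):
    """Illustrative adaptive DP: shrink the privacy budget (more noise)
    when the hypothetical `sensitivity_score` in (0, 1] is high. The paper
    does not specify this exact rule."""
    # Clip the gradient to bound its L2 sensitivity (standard DP-SGD step).
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    eps = base_eps * (1.0 - 0.5 * sensitivity_score)  # adapt the budget
    # Gaussian-mechanism noise scale for (eps, delta=1e-5)-DP.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / 1e-5)) / eps
    return clipped + np.random.normal(0.0, sigma, size=grad.shape)

def dynamic_mask(grad, mask_ratio=0.3, rng=None):
    """Zero out a random fraction of gradient coordinates each round to
    blunt gradient-inversion reconstruction (illustrative masking rule,
    not the paper's exact DM scheme)."""
    rng = rng if rng is not None else np.random.default_rng()
    keep = rng.random(grad.shape) >= mask_ratio
    return grad * keep
```

In a real deployment the mask pattern and the epsilon schedule would themselves be learned from context, which is the adaptive element FAPL claims.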
Key Contributions
- Federated Adaptive Privacy-Aware Learning (FAPL) that dynamically adjusts differential privacy and gradient masking in real time based on data sensitivity and environmental conditions
- Dynamic Masking (DM) that adaptively obfuscates gradient/logit updates to defend against gradient-driven data leakage while preserving model convergence
- Integration of blockchain-based provenance logging and SMPC-based secure aggregation to defend against model poisoning and unauthorized data access in IoV federated learning
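The SMPC-based secure aggregation in the last contribution can be sketched with pairwise additive masking, the core trick behind secure-aggregation protocols: each client pair shares a random mask that one adds and the other subtracts, so individual updates are hidden while the sum survives. This toy (function name and seeding are assumptions) omits the key agreement and dropout handling a real protocol needs.

```python
import numpy as np

def pairwise_masked_updates(updates, seed=0):
    """Mask each client's update with pairwise random vectors that cancel
    in aggregation: for each pair (i, j), i < j, client i adds a shared
    mask and client j subtracts it. The server sees only masked updates,
    yet their sum recovers the true sum (up to floating-point rounding)."""
    n = len(updates)
    rng = np.random.default_rng(seed)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=updates[0].shape)
            masked[i] += mask
            masked[j] -= mask
    return masked

# Usage: three clients, per-coordinate updates 0, 1, 2.
updates = [np.full(4, float(k)) for k in range(3)]
masked = pairwise_masked_updates(updates)
total = sum(masked)  # masks cancel pairwise; approximately the true sum
```

FedAvg would then divide `total` by the number of clients (or weight by local dataset sizes) to form the new global model.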
🛡️ Threat Analysis
Blockchain-based decentralized validation and smart contracts are explicitly framed as defenses against 'model poisoning' by malicious participants in the FL process — a Byzantine/poisoning threat model mapped to ML02 in the federated learning context.
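The provenance-logging side of this defense can be illustrated with a minimal append-only hash chain: each logged model update commits to the previous block's hash, so tampering with any recorded update breaks verification. This is a single-node toy under assumed names; a blockchain deployment replaces it with distributed consensus and smart-contract validation.

```python
import hashlib
import json

class ProvenanceChain:
    """Toy hash chain for model-update provenance (illustrative only)."""

    def __init__(self):
        genesis = {"index": 0, "prev": "0" * 64, "payload": "genesis"}
        genesis["hash"] = self._digest(genesis)
        self.blocks = [genesis]

    @staticmethod
    def _digest(block):
        # Hash only the committed fields, never the stored hash itself.
        body = json.dumps(
            {k: block[k] for k in ("index", "prev", "payload")},
            sort_keys=True,
        ).encode()
        return hashlib.sha256(body).hexdigest()

    def append(self, payload):
        prev = self.blocks[-1]
        block = {"index": prev["index"] + 1, "prev": prev["hash"],
                 "payload": payload}
        block["hash"] = self._digest(block)
        self.blocks.append(block)

    def verify(self):
        # Recompute every digest and check each back-link.
        for a, b in zip(self.blocks, self.blocks[1:]):
            if b["prev"] != a["hash"] or b["hash"] != self._digest(b):
                return False
        return True
```

A validator rejecting an update whose hash is absent from (or inconsistent with) the chain is the poisoning check this threat model describes.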
Dynamic Masking is explicitly designed to provide 'good immunity against gradient-driven data leakage' — the adversary here reconstructs training data from shared gradients in FL, which is a canonical ML03 threat. SMPC for secure aggregation also directly defends against gradient-level data reconstruction.