defense 2026

Beyond Passive Aggregation: Active Auditing and Topology-Aware Defense in Decentralized Federated Learning

Sheng Pan, Niansheng Tang



Published on arXiv (2603.18538)

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Active auditing framework achieves competitive performance with state-of-the-art defenses in mitigating adaptive backdoors while preserving task utility

Active Auditing Framework with Topology-Aware Defense

Novel technique introduced


Decentralized Federated Learning (DFL) remains highly vulnerable to adaptive backdoor attacks designed to bypass traditional passive defense metrics. To address this limitation, we shift the defensive paradigm toward a novel active, interventional auditing framework. First, we establish a dynamical model to characterize the spatiotemporal diffusion of adversarial updates across complex graph topologies. Second, we introduce a suite of proactive auditing metrics: stochastic entropy anomaly, randomized smoothing Kullback-Leibler divergence, and activation kurtosis. These metrics use private probes to stress-test local models, effectively exposing latent backdoors that remain invisible to conventional static detection. Furthermore, we implement a topology-aware defense placement strategy to maximize global aggregation resilience. We provide theoretical guarantees for the system's convergence under co-evolving attack and defense dynamics. Empirical evaluations across diverse architectures demonstrate that our active framework is highly competitive with state-of-the-art defenses in mitigating stealthy, adaptive backdoors while preserving primary task utility.
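The entry does not reproduce the paper's formula for the randomized smoothing KL-divergence metric, but the idea can be sketched as follows: query a model on a private probe input, query it again under Gaussian input noise, and measure the KL divergence between the clean prediction and the noise-averaged prediction. A model whose decision hinges on a narrow backdoor feature shifts sharply under noise, inflating the divergence. The `smoothing_kl` helper, the `sigma`/`n` parameters, and the toy linear models below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def smoothing_kl(predict, x, sigma=0.5, n=200, rng=None):
    """KL(p_clean || p_smoothed) for one private probe input x.

    predict maps a batch of inputs to class logits. Hypothetical
    instantiation of the paper's randomized-smoothing KL metric.
    """
    rng = rng or np.random.default_rng(0)
    p = softmax(predict(x[None]))[0]                      # clean prediction
    noisy = x[None] + sigma * rng.normal(size=(n,) + x.shape)
    q = softmax(predict(noisy)).mean(axis=0)              # noise-averaged prediction
    eps = 1e-12
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy comparison: a smooth linear model vs. one hijacked by a single feature.
d = 10
x = np.full(d, 0.2)
W_robust = np.stack([np.full(d, 0.3), np.full(d, -0.3)], axis=1)
W_backdoor = np.stack([np.full(d, 0.05), np.full(d, -0.05)], axis=1)
W_backdoor[0] = [8.0, -8.0]  # one feature carries a backdoor-like shortcut

kl_good = smoothing_kl(lambda X: X @ W_robust, x)
kl_bad = smoothing_kl(lambda X: X @ W_backdoor, x)
print(kl_good, kl_bad)  # divergence is markedly larger for the backdoored model
```

The probe input stays private to the auditor, so an adaptive attacker cannot tune the backdoor to look stable on it.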


Key Contributions

  • Dynamical model characterizing spatiotemporal diffusion of adversarial updates across graph topologies in DFL
  • Suite of proactive auditing metrics (stochastic entropy anomaly, randomized smoothing KL-divergence, activation kurtosis) using private probes to expose latent backdoors
  • Topology-aware defense placement strategy integrated with Multi-Armed Bandit framework for defense node allocation
  • Theoretical convergence analysis under co-evolving attack and defense dynamics
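The Multi-Armed Bandit framing of defense node allocation mentioned above can be illustrated with a standard UCB1 loop: the auditor spends one audit per round on a node, observes whether the audit surfaced an anomaly, and learns to concentrate its budget on the most suspicious nodes. This is a minimal sketch assuming Bernoulli anomaly rewards and a UCB1 policy; the paper's actual bandit formulation and topology weighting may differ.

```python
import numpy as np

def ucb1_audit(node_anomaly_prob, rounds=2000, rng=None):
    """Allocate one audit per round across nodes with UCB1.

    node_anomaly_prob[i] is the (unknown-to-the-auditor) chance that
    auditing node i surfaces an anomaly; higher means more suspicious.
    Returns how often each node was audited.
    """
    rng = rng or np.random.default_rng(0)
    n = len(node_anomaly_prob)
    counts = np.zeros(n)
    values = np.zeros(n)  # running mean reward per node
    for t in range(1, rounds + 1):
        if t <= n:
            arm = t - 1  # audit each node once to initialize
        else:
            ucb = values + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        reward = float(rng.random() < node_anomaly_prob[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts

counts = ucb1_audit([0.05, 0.05, 0.6, 0.05])  # node 2 behaves anomalously
print(int(counts.argmax()))  # index of the most-audited node
```

In a topology-aware variant, the per-node reward would additionally weight a node's graph centrality, so that auditing high-influence nodes yields more aggregation resilience per unit of budget.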

🛡️ Threat Analysis

Model Poisoning

The primary focus is defending against backdoor attacks in federated learning, using novel detection metrics (stochastic entropy anomaly, randomized smoothing KL-divergence, activation kurtosis) to expose latent backdoors.
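Of the three metrics, activation kurtosis is the simplest to sketch: backdoored models tend to concentrate trigger behavior in a few neurons, so their activations under random probe inputs are heavy-tailed (high excess kurtosis), while benign activations look closer to Gaussian. The function below is a hedged illustration of that statistic on synthetic data, not the paper's exact estimator.

```python
import numpy as np

def activation_kurtosis(activations):
    """Excess kurtosis of a flattened activation tensor.

    Near 0 for roughly Gaussian activations; large and positive when a
    handful of neurons fire anomalously hard (a backdoor signature).
    """
    a = np.asarray(activations, dtype=np.float64).ravel()
    mu, sigma = a.mean(), a.std()
    if sigma == 0:
        return 0.0
    return float(np.mean(((a - mu) / sigma) ** 4) - 3.0)

rng = np.random.default_rng(0)
clean = rng.normal(size=10_000)  # benign-looking activations
# A few hijacked neurons firing far outside the bulk of the distribution:
poisoned = np.concatenate([clean, rng.normal(12.0, 0.1, size=50)])

print(activation_kurtosis(clean))     # close to zero
print(activation_kurtosis(poisoned))  # large positive value
```

Because the statistic is computed on the auditor's private probes, an attacker cannot easily regularize the backdoor neurons' responses on inputs it never sees.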


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time
Applications
decentralized federated learning