
TinyGuard: A Lightweight Byzantine Defense for Resource-Constrained Federated Learning via Statistical Update Fingerprints

Ali Mahdavi 1, Sana Aghapour 2, Azadeh Zamanifar 1, Amirfarhad Farhadi 3

0 citations · 26 references · arXiv (Cornell University)


Published on arXiv

2602.02615

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Achieves up to 95–97% test accuracy under sign-flipping, scaling, noise injection, and label poisoning attacks with stable detection precision ~0.80 across 50–150 clients, while reducing computational overhead from O(n²d) to O(nd)

TinyGuard

Novel technique introduced


Existing Byzantine-robust aggregation mechanisms typically rely on full-dimensional gradient comparisons or pairwise distance computations, resulting in computational overhead that limits applicability in large-scale and resource-constrained federated systems. This paper proposes TinyGuard, a lightweight Byzantine defense that augments the standard FedAvg algorithm via statistical update fingerprinting. Instead of operating directly on high-dimensional gradients, TinyGuard extracts compact statistical fingerprints capturing key behavioral properties of client updates, including norm statistics, layer-wise ratios, sparsity measures, and low-order moments. Byzantine clients are identified by measuring robust statistical deviations in this low-dimensional fingerprint space with O(nd) complexity, without modifying the underlying optimization procedure. Extensive experiments on MNIST, Fashion-MNIST, ViT-Lite, and ViT-Small with LoRA adapters demonstrate that TinyGuard preserves FedAvg convergence in benign settings and achieves up to 95% accuracy under multiple Byzantine attack scenarios, including sign-flipping, scaling, noise injection, and label poisoning. Against adaptive white-box adversaries, Pareto frontier analysis across four orders of magnitude confirms that attackers cannot simultaneously evade detection and achieve effective poisoning, a property we term statistical handcuffs. Ablation studies validate stable detection precision of ~0.8 across varying client counts (50–150), threshold parameters, and extreme data heterogeneity. The proposed framework is architecture-agnostic and well suited for federated fine-tuning of foundation models, where traditional Byzantine defenses become impractical.


Key Contributions

  • Statistical fingerprinting mechanism that compresses high-dimensional client gradients into compact low-dimensional feature vectors capturing norm statistics, layer-wise ratios, sparsity, and low-order moments
  • FedAvg-compatible Byzantine detection strategy with O(nd) complexity — linear in clients and parameters — versus O(n²d) for classical robust aggregation methods
  • Pareto frontier analysis showing adaptive white-box attackers face 'statistical handcuffs': they cannot simultaneously evade detection and achieve effective poisoning
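The fingerprint-then-detect pipeline can be illustrated with a minimal sketch. This is not the paper's implementation: the exact fingerprint features, the robust deviation test (here a median/MAD z-score), and the threshold are all assumptions; the sketch only shows how compressing each update to a few statistics makes detection linear in clients and parameters rather than quadratic in clients.

```python
import numpy as np

def fingerprint(update, eps=1e-12):
    """Compress a flattened client update into a small statistical
    fingerprint. Feature set is a hypothetical stand-in for the paper's
    norm statistics, sparsity measures, and low-order moments."""
    u = np.asarray(update, dtype=np.float64).ravel()
    return np.array([
        np.linalg.norm(u),            # L2 norm
        np.abs(u).max(),              # L-infinity norm
        np.mean(np.abs(u) < eps),     # sparsity fraction
        u.mean(),                     # first moment
        u.std(),                      # second moment (spread)
    ])

def detect_byzantine(updates, thresh=3.0):
    """Flag clients whose fingerprints deviate from the robust center.
    Cost: O(nd) to fingerprint n updates of dimension d, then O(nk) on
    the k-dimensional fingerprints -- vs O(n^2 d) for pairwise schemes."""
    F = np.stack([fingerprint(u) for u in updates])   # shape (n, k)
    med = np.median(F, axis=0)
    mad = np.median(np.abs(F - med), axis=0) + 1e-12  # robust scale
    z = np.abs(F - med) / (1.4826 * mad)              # robust z-scores
    return z.max(axis=1) > thresh                     # True = flagged

# Toy round: 9 honest clients plus one scaled sign-flip attacker.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.01, 1000) for _ in range(9)]
attacker = [-50.0 * honest[0]]
flags = detect_byzantine(honest + attacker)
```

Because the median and MAD are computed over all ten fingerprints, the single attacker cannot drag the center toward itself, which is what gives the detector its robustness to a Byzantine minority.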

🛡️ Threat Analysis

Data Poisoning Attack

TinyGuard directly defends against Byzantine attacks in FL — malicious clients submitting corrupted updates (sign-flipping, scaling, noise injection, label poisoning) to degrade global model performance. Byzantine-fault-tolerant FL aggregation defenses are explicitly listed under ML02.


Details

Domains
federated-learning
Model Types
cnn, transformer, federated
Threat Tags
white_box, training_time, untargeted
Datasets
MNIST, Fashion-MNIST, ViT-Lite, ViT-Small with LoRA
Applications
federated learning, federated fine-tuning of foundation models