SVDefense: Effective Defense against Gradient Inversion Attacks via Singular Value Decomposition
Chenxiang Luo, David K.Y. Yau, Qun Song
Published on arXiv (arXiv:2510.03319)
Model Inversion Attack
OWASP ML Top 10: ML03
Key Finding
SVDefense outperforms existing gradient inversion defenses across image classification, human activity recognition (HAR), and keyword spotting tasks, providing robust privacy protection with minimal accuracy degradation while remaining deployable on resource-constrained embedded platforms.
SVDefense
Novel technique introduced
Federated learning (FL) enables collaborative model training without sharing raw data but is vulnerable to gradient inversion attacks (GIAs), in which adversaries reconstruct private data from shared gradients. Existing defenses either incur impractical computational overhead on embedded platforms or fail to achieve both privacy protection and good model utility. Moreover, many defenses can easily be bypassed by adaptive adversaries who know the defense details. To address these limitations, we propose SVDefense, a novel defense framework against GIAs that leverages truncated Singular Value Decomposition (SVD) to obfuscate gradient updates. SVDefense introduces three key innovations: a Self-Adaptive Energy Threshold that adapts to client vulnerability, a Channel-Wise Weighted Approximation that selectively preserves the gradient information essential for effective model training while enhancing privacy protection, and a Layer-Wise Weighted Aggregation for effective model aggregation under class imbalance. Our extensive evaluation shows that SVDefense outperforms existing defenses across multiple applications, including image classification, human activity recognition, and keyword spotting, offering robust privacy protection with minimal impact on model accuracy. Furthermore, SVDefense is practical for deployment on various resource-constrained embedded platforms. We will make our code publicly available upon paper acceptance.
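The core mechanism can be sketched with plain NumPy: compute the SVD of a gradient matrix, keep the smallest rank whose retained "energy" (cumulative squared singular values) reaches a threshold, and share only the low-rank reconstruction. This is a minimal sketch of truncated-SVD gradient obfuscation; the fixed `energy_threshold` stands in for the paper's per-client Self-Adaptive Energy Threshold, whose exact adaptation rule is not reproduced here.

```python
import numpy as np

def truncated_svd_obfuscate(grad, energy_threshold=0.95):
    """Replace a gradient matrix with its low-rank SVD approximation.

    Keeps the smallest rank k whose cumulative squared singular values
    reach `energy_threshold` of the total. The energy criterion follows
    the paper's description; the threshold value here is illustrative.
    """
    U, s, Vt = np.linalg.svd(grad, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, energy_threshold) + 1)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :], k

rng = np.random.default_rng(0)
# Synthetic 64x32 "gradient": a rank-8 signal plus small noise.
g = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 32))
g += 0.01 * rng.standard_normal((64, 32))

approx, k = truncated_svd_obfuscate(g, energy_threshold=0.95)
rel_err = np.linalg.norm(g - approx) / np.linalg.norm(g)
print(f"kept rank {k} of {min(g.shape)}, relative error {rel_err:.3f}")
```

Because the retained energy bounds the Frobenius-norm error, the shared low-rank gradient stays useful for training while discarding the fine-grained detail that inversion attacks exploit.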
Key Contributions
- Self-Adaptive Energy Threshold that adjusts SVD truncation rank based on per-client vulnerability to gradient inversion attacks
- Channel-Wise Weighted Approximation that selectively preserves gradient information critical for model convergence while maximizing privacy protection
- Layer-Wise Weighted Aggregation that handles class imbalance across federated clients without requiring raw data sharing
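The third contribution, layer-wise weighted aggregation, can be illustrated with a generic sketch: the server averages each layer separately, using per-client, per-layer weights. The weighting scheme below is a hypothetical stand-in; the paper derives its weights from class-imbalance statistics, which are not reproduced here, so the weights are simply inputs.

```python
import numpy as np

def layerwise_weighted_aggregate(client_layers, client_weights):
    """Aggregate per-layer client updates with per-layer weights.

    client_layers: list over clients, each a list of layer arrays.
    client_weights: array of shape (n_clients, n_layers); in SVDefense
    these would come from class-imbalance statistics (assumption here:
    they are supplied directly and normalized per layer).
    """
    n_clients = len(client_layers)
    n_layers = len(client_layers[0])
    aggregated = []
    for l in range(n_layers):
        w = client_weights[:, l]
        w = w / w.sum()  # normalize weights for this layer
        aggregated.append(
            sum(w[c] * client_layers[c][l] for c in range(n_clients))
        )
    return aggregated

# Two clients, two layers; uniform weights reduce to plain averaging.
client_a = [np.ones((2, 2)), np.zeros(3)]
client_b = [3 * np.ones((2, 2)), np.ones(3)]
out = layerwise_weighted_aggregate([client_a, client_b], np.ones((2, 2)))
print(out[0])  # each entry is the mean of 1 and 3, i.e. 2
```

Weighting each layer independently lets the server discount layers from clients whose local class distribution makes those layers unreliable, without any raw data leaving the clients.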
🛡️ Threat Analysis
The paper directly defends against gradient inversion attacks (GIAs), where an adversary reconstructs private training data from shared gradients in federated learning. SVDefense uses truncated SVD to obfuscate gradient updates before sharing, preventing data reconstruction. This is the canonical gradient leakage / data reconstruction threat model described under ML03.
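To make the threat concrete, here is the classic leakage example for a fully connected layer with bias: for a single training sample, the weight gradient is an outer product of the output-error vector and the input, so an eavesdropper holding both the weight and bias gradients recovers the private input exactly by element-wise division. This is a minimal, well-known illustration of gradient leakage, not the paper's specific attack setup.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(16)        # private input (to be "stolen")
W = rng.standard_normal((4, 16))   # linear layer weights
b = rng.standard_normal(4)         # bias
y = rng.standard_normal(4)         # regression target

# Gradients of L = 0.5 * ||W x + b - y||^2 for one sample:
delta = W @ x + b - y              # dL/db
grad_W = np.outer(delta, x)        # dL/dW = delta x^T

# An adversary observing (grad_W, delta) recovers x exactly
# from any row of grad_W divided by the matching bias gradient:
x_recovered = grad_W[0] / delta[0]
print(np.allclose(x_recovered, x))  # prints True
```

Truncating the SVD of `grad_W` before sharing destroys this exact outer-product structure, which is precisely what SVDefense exploits to block reconstruction.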