
Non-Linear Trajectory Modeling for Multi-Step Gradient Inversion Attacks in Federated Learning

Li Xia 1, Jing Yu 1, Zheng Liu 1, Sili Huang 1, Wei Tang 2, Xuan Liu 1

2 citations · 48 references · arXiv


Published on arXiv: 2509.22082

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

NL-SME achieves 94%–98% performance improvements over linear SME baselines and order-of-magnitude lower cosine similarity loss on CIFAR-100 and FEMNIST gradient inversion tasks.

NL-SME (Non-Linear Surrogate Model Extension)

Novel technique introduced


Federated Learning (FL) enables collaborative training while preserving privacy, yet Gradient Inversion Attacks (GIAs) pose severe threats by reconstructing private data from shared gradients. In realistic FedAvg scenarios with multi-step updates, existing surrogate methods such as SME rely on linear interpolation to approximate client trajectories. However, we demonstrate that the linear assumption fundamentally underestimates SGD's nonlinear complexity: with only one dimension of expressiveness, it encounters irreducible approximation barriers in non-convex landscapes. We propose Non-Linear Surrogate Model Extension (NL-SME), the first framework to introduce learnable quadratic Bézier curves for trajectory modeling in GIAs against FL. NL-SME leverages (|w|+1)-dimensional control point parameterization combined with dvec scaling and regularization mechanisms to achieve superior approximation accuracy. Extensive experiments on CIFAR-100 and FEMNIST demonstrate that NL-SME significantly outperforms baselines across all metrics, achieving 94%–98% performance gaps and order-of-magnitude improvements in cosine similarity loss while maintaining computational efficiency. This work exposes critical privacy vulnerabilities in FL's multi-step paradigm and provides insights for robust defense development.
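The two trajectory surrogates can be contrasted in a few lines. The sketch below is illustrative only, not the authors' implementation: the dvec scaling and regularization terms are omitted, the function names are hypothetical, and tiny random vectors stand in for model weights.

```python
import numpy as np

# SME approximates the client's multi-step path between the weights it sent
# out (w0) and the weights it got back (w1) with a straight line; NL-SME
# replaces that line with a quadratic Bezier curve whose control point c is
# learned, adding a full |w|-dimensional degree of freedom to the surrogate.

def linear_interp(w0, w1, t):
    """SME-style linear surrogate: a one-dimensional path between endpoints."""
    return (1 - t) * w0 + t * w1

def bezier_interp(w0, w1, c, t):
    """Quadratic Bezier surrogate; the learnable control point c bends the path."""
    return (1 - t) ** 2 * w0 + 2 * t * (1 - t) * c + t ** 2 * w1

rng = np.random.default_rng(0)
w0, w1 = rng.normal(size=4), rng.normal(size=4)   # stand-ins for model weights
c = (w0 + w1) / 2 + rng.normal(scale=0.1, size=4)  # control point near midpoint
```

Both surrogates interpolate the endpoints exactly, and when the control point sits at the exact midpoint the curve collapses to SME's straight line, so the linear baseline is a strict special case of the Bézier surrogate.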


Key Contributions

  • Theoretical analysis showing linear interpolation (SME) has an irreducible approximation error bound proportional to SGD trajectory curvature in non-convex landscapes
  • NL-SME: the first gradient inversion attack framework using learnable quadratic Bézier curves for multi-step client trajectory modeling, providing |w|+1-dimensional expressiveness vs. linear's 1-dimensional
  • Empirical demonstration of 94%–98% performance improvements over SME baselines on CIFAR-100 and FEMNIST, with order-of-magnitude lower cosine similarity loss
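The cosine similarity loss mentioned above is the standard gradient-matching objective in this attack family: the attacker optimizes dummy inputs (and, in NL-SME, the surrogate trajectory) so that the resulting gradient aligns with the observed client update. A generic sketch, not the paper's exact loss:

```python
import numpy as np

def cosine_loss(g_obs, g_sur):
    """1 - cosine similarity between the observed update and surrogate gradient.

    0 means perfectly aligned; 2 means pointing in opposite directions.
    A small epsilon guards against division by zero for degenerate gradients.
    """
    num = float(np.dot(g_obs, g_sur))
    den = float(np.linalg.norm(g_obs) * np.linalg.norm(g_sur)) + 1e-12
    return 1.0 - num / den
```

Because the loss is scale-invariant, it rewards matching the update's direction rather than its magnitude, which is why trajectory shape (and hence the surrogate's expressiveness) matters so much.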

🛡️ Threat Analysis

Model Inversion Attack

Core contribution is a gradient inversion attack that reconstructs private client training data from shared gradients in FedAvg — the canonical ML03 threat. NL-SME improves the surrogate trajectory modeling step that enables data reconstruction, directly exploiting gradient leakage in federated learning.
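The multi-step aspect is what makes the linear baseline fall short: a few local SGD steps on any curved loss surface trace a bent path, and a straight chord between the endpoints cannot follow it. The toy experiment below (an assumed toy objective, not from the paper) runs K local steps and measures how far the intermediate iterates stray from the chord a linear surrogate would use.

```python
import numpy as np

# Toy non-convex loss L(w) = sum(sin(w)) + 0.5 * ||w||^2, chosen only to give
# SGD a curved trajectory; its gradient is cos(w) + w.
def grad(w):
    return np.cos(w) + w

w = np.array([1.5, -0.7, 0.3])
traj = [w.copy()]
for _ in range(5):              # K = 5 local steps, as in multi-step FedAvg
    w = w - 0.4 * grad(w)
    traj.append(w.copy())

w0, wK = traj[0], traj[-1]
chord = wK - w0                 # the path a linear surrogate assumes

def dist_to_chord(p):
    """Distance from point p to the line through w0 and wK."""
    t = np.dot(p - w0, chord) / np.dot(chord, chord)
    return np.linalg.norm(p - (w0 + t * chord))

max_dev = max(dist_to_chord(p) for p in traj[1:-1])
```

Any positive `max_dev` is approximation error no choice of interpolation parameter can remove for a linear surrogate; the Bézier control point gives NL-SME a second direction with which to absorb exactly this deviation.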


Details

Domains
federated-learning · vision
Model Types
federated · cnn
Threat Tags
white_box · training_time
Datasets
CIFAR-100 · FEMNIST
Applications
federated learning · privacy-preserving collaborative training