Attack · 2026

SOMP: Scalable Gradient Inversion for Large Language Models via Subspace-Guided Orthogonal Matching Pursuit

Yibo Li 1, Qiongxiu Li 2


Published on arXiv: 2603.16761

Model Inversion Attack (OWASP ML Top 10: ML03)

Sensitive Information Disclosure (OWASP LLM Top 10: LLM06)

Key Finding

Achieves 2.4x higher reconstruction fidelity than baselines at batch size 16 and recovers meaningful text even at extreme aggregation (B=128)

SOMP

Novel technique introduced


Gradient inversion attacks reveal that private training text can be reconstructed from shared gradients, posing a privacy risk to large language models (LLMs). While prior methods perform well in small-batch settings, scaling to larger batch sizes and longer sequences remains challenging due to severe signal mixing, high computational cost, and degraded fidelity. We present SOMP (Subspace-Guided Orthogonal Matching Pursuit), a scalable gradient inversion framework that casts text recovery from aggregated gradients as a sparse signal recovery problem. Our key insight is that aggregated transformer gradients retain exploitable head-wise geometric structure together with sample-level sparsity. SOMP leverages these properties to progressively narrow the search space and disentangle mixed signals without exhaustive search. Experiments across multiple LLM families, model scales, and five languages show that SOMP consistently outperforms prior methods in the aggregated-gradient regime. For long sequences at batch size B=16, SOMP achieves substantially higher reconstruction fidelity than strong baselines, while remaining computationally competitive. Even under extreme aggregation (up to B=128), SOMP still recovers meaningful text, suggesting that privacy leakage can persist in regimes where prior attacks become much less effective.


Key Contributions

  • Reformulates gradient inversion as a sparse signal recovery problem, exploiting the head-wise geometric structure of transformer gradients
  • Scales to large batch sizes (B=16-128) and long sequences where prior methods fail
  • Achieves 2.4x improvement in reconstruction fidelity over baselines at batch size 16 while remaining computationally competitive
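SOMP builds on orthogonal matching pursuit, the classic greedy algorithm for sparse recovery. For reference, here is a minimal sketch of plain OMP (not the paper's subspace-guided variant; the dictionary, signal, and sparsity level are illustrative): given a dictionary `D` and an observation `y`, it repeatedly picks the atom most correlated with the current residual, then re-fits coefficients over the selected support.

```python
import numpy as np

def omp(D, y, k):
    """Classic Orthogonal Matching Pursuit: greedily select k atoms
    from dictionary D (n x m) whose span best explains the signal y (n,)."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        correlations = np.abs(D.T @ residual)
        correlations[support] = -np.inf  # exclude already-chosen atoms
        support.append(int(np.argmax(correlations)))
        # Re-fit coefficients over the selected support (least squares),
        # so the residual is orthogonal to all chosen atoms.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x, support

# Toy demo: recover a 3-sparse coefficient vector from a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -2.0, 0.5]
y = D @ x_true
x_hat, sup = omp(D, y, 3)
```

In SOMP's setting the "atoms" correspond to candidate token/sample signals and `y` to the aggregated gradient; the paper's contribution is using head-wise subspace structure to shrink the candidate set before this greedy selection, avoiding exhaustive search.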

🛡️ Threat Analysis

Model Inversion Attack

The paper demonstrates reconstruction of private training data from shared gradients in federated learning: a model inversion attack in which the adversary (an honest-but-curious server) recovers the text clients trained on by exploiting the structure of transformer gradients.
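The threat model can be made concrete with a toy example (hypothetical shapes and a toy loss, not the paper's setup): in federated learning the server never sees per-sample gradients, only their batch average, which mixes the signals of all B samples and attenuates each contribution by 1/B.

```python
import numpy as np

# Toy illustration of the attack surface: the server observes only the
# batch-averaged gradient shared by a client.
rng = np.random.default_rng(1)
W = rng.standard_normal((4, 8))      # toy model weights
xs = rng.standard_normal((16, 8))    # B=16 private client inputs

# Per-sample gradients of the toy loss 0.5 * ||W x||^2 w.r.t. W,
# which is (W x) x^T for each sample x.
per_sample_grads = np.stack([np.outer(W @ x, x) for x in xs])

# What the server actually receives: the mean over the batch. Each
# sample's gradient is scaled by 1/B, so larger batches mix signals
# more heavily -- the regime where prior inversion attacks degrade.
shared = per_sample_grads.mean(axis=0)
```

Gradient inversion asks: given only `shared` (and the model weights), how much of `xs` can be recovered? SOMP's result is that meaningful recovery persists even at B=128.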


Details

Domains
nlp, federated-learning
Model Types
llm, transformer, federated
Threat Tags
training_time, white_box
Models
GPT-2, GPT-J-6B, Qwen3-8B
Applications
federated learning, privacy auditing, gradient leakage analysis