Defense · 2025

Quantifying Information Disclosure During Gradient Descent Using Gradient Uniqueness

Sleem Abdelghafar , Maryam Aliakbarpour , Chris Jermaine

0 citations · 35 references · arXiv (Cornell University)


Published on arXiv · 2510.10902

Model Inversion Attack

OWASP ML Top 10 — ML03

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

GNQ strongly predicts sequence extractability in targeted attacks and reveals that disclosure risk concentrates heterogeneously on specific training examples over the course of LLM training

Gradient Uniqueness (GNQ) / Batch-Space Ghost GNQ (BS-Ghost GNQ)

Novel technique introduced


Disclosing private information by publishing a machine learning model is a frequent concern. Intuitively, publishing a learned model should be less risky than publishing the dataset itself. But how much risk remains? In this paper, we present a principled disclosure metric called *gradient uniqueness*, derived from an upper bound on the amount of information disclosed by publishing a learned model. Gradient uniqueness provides an intuitive way to perform privacy auditing. Its mathematical derivation is general, making no assumptions about the model architecture, the dataset type, or the attacker's strategy. We examine a simple defense based on monitoring gradient uniqueness and find that it achieves privacy comparable to classical methods such as DP-SGD while delivering substantially better test accuracy (utility).
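The abstract describes monitoring how distinctive each training example's gradient is relative to the rest of the batch. A minimal sketch of that idea, using a toy linear model and a regularized Mahalanobis-style score as an illustrative proxy — this is an assumption for exposition, not the paper's exact GNQ formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: linear regression with per-example gradients of squared loss.
# The "uniqueness" score below is an illustrative proxy, NOT the paper's
# GNQ definition: it measures how atypical each example's gradient is
# relative to the batch via a regularized Mahalanobis-style norm.
n, p = 64, 8
X = rng.normal(size=(n, p))
w = rng.normal(size=p)
y = X @ w + 0.1 * rng.normal(size=n)

residual = X @ w - y                   # shape (n,)
G = residual[:, None] * X              # per-example gradients, shape (n, p)

lam = 1e-3                             # assumed regularizer
Sigma = G.T @ G / n + lam * np.eye(p)  # gradient second-moment matrix (p x p)
scores = np.einsum("ip,pq,iq->i", G, np.linalg.inv(Sigma), G)

# Higher score -> the example's gradient is harder to explain by the
# rest of the batch, i.e. it is more "unique" and more at risk.
print(scores.argmax(), scores.max())
```

A monitoring defense in this spirit would flag (or withhold updates from) examples whose score exceeds a threshold during training.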


Key Contributions

  • Gradient Uniqueness (GNQ) — an attack-agnostic, information-theoretic metric derived from an upper bound on per-datapoint information disclosure from published models via gradient descent
  • Batch-Space Ghost GNQ (BS-Ghost GNQ) — an efficient in-run algorithm that avoids forming and inverting the P×P parameter matrix, enabling GNQ computation during LLM-scale training with minimal overhead
  • Empirical validation showing GNQ strongly predicts sequence extractability in targeted extraction attacks and that a GNQ-based monitoring defense achieves privacy comparable to DP-SGD with better utility
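The "batch-space" idea in the second contribution — avoiding the P×P parameter matrix — can be illustrated with the standard push-through (Woodbury-style) identity: a score that nominally requires a P×P inverse can be computed from the B×B Gram matrix of per-example gradients when the batch size B is far smaller than the parameter count P. The sketch below shows only that linear-algebra trick, with an assumed regularizer `lam`; it is not the paper's BS-Ghost GNQ algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

B, P = 16, 512                 # batch size << parameter count
G = rng.normal(size=(B, P))    # stand-in per-example gradients
lam = 1e-3                     # assumed regularizer

# Naive parameter-space computation: invert a P x P matrix.
Sigma = G.T @ G / B + lam * np.eye(P)
scores_param = np.einsum("ip,pq,iq->i", G, np.linalg.inv(Sigma), G)

# Batch-space computation: only a B x B solve is needed, via the
# push-through identity
#   G (G^T G / B + lam I_P)^{-1} G^T = (G G^T / B + lam I_B)^{-1} G G^T.
K = G @ G.T
scores_batch = np.diag(np.linalg.solve(K / B + lam * np.eye(B), K))

assert np.allclose(scores_param, scores_batch)
```

For LLM-scale P, only the batch-space route is feasible; the two computations agree exactly, which is what makes an in-run, low-overhead variant possible.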

🛡️ Threat Analysis

Model Inversion Attack

Directly quantifies how much private training data is embedded in a model trained via gradient descent and then published, validated against targeted extraction attacks in which an adversary attempts to reconstruct or extract training sequences from the model.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, black_box
Applications
llm training privacy, model publishing risk assessment, privacy auditing