
Understanding Disclosure Risk in Differential Privacy with Applications to Noise Calibration and Auditing (Extended Version)

Patricia Guerra-Balboa 1, Annika Sauer 1, Héber H. Arcolezi 2,3, Thorsten Strufe 1

Published on arXiv (2603.12142)

Model Inversion Attack

OWASP ML Top 10 — ML03

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

Reconstruction advantage (RAD) yields tighter bounds than reconstruction robustness (ReRo) and, unlike ReRo, does not violate its claimed bounds under realistic auxiliary-knowledge assumptions, enabling more accurate DP auditing and reduced noise requirements for equivalent privacy guarantees.

Reconstruction Advantage (RAD)

Novel technique introduced


Differential Privacy (DP) is widely adopted in data management systems to enable data sharing with formal disclosure guarantees. A central systems challenge is understanding how DP noise translates into effective protection against inference attacks, since this directly determines achievable utility. Most existing analyses focus only on membership inference, which captures just one threat, or rely on reconstruction robustness (ReRo). However, we show that under realistic assumptions ReRo can yield misleading risk estimates and violate its claimed bounds, limiting its usefulness for principled DP calibration and auditing. This paper introduces reconstruction advantage, a unified risk metric that consistently captures risk across membership inference, attribute inference, and data reconstruction. We derive tight bounds relating DP noise to adversarial advantage and characterize optimal adversarial strategies for arbitrary DP mechanisms and attacker knowledge. These results enable risk-driven noise calibration and provide a foundation for systematic DP auditing. We show that reconstruction advantage improves the accuracy and scope of DP auditing and enables more effective utility-privacy trade-offs in DP-enabled data management systems.


Key Contributions

  • Introduces reconstruction advantage (RAD), a unified risk metric covering membership inference, attribute inference, and data reconstruction attacks against DP mechanisms
  • Derives tight, auxiliary-knowledge-aware bounds relating DP noise parameters to adversarial advantage, showing existing ReRo bounds can be violated under realistic conditions
  • Provides closed-form optimal adversarial strategies enabling principled DP auditing and risk-driven noise calibration with improved utility-privacy trade-offs
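The paper's exact RAD bounds are not reproduced here, but the risk-driven calibration workflow the contributions describe can be sketched with the standard pure-ε-DP upper bound on membership-inference advantage, adv ≤ (e^ε − 1)/(e^ε + 1): fix a tolerable advantage, invert the bound to obtain ε, then derive the mechanism's noise scale. A minimal Python sketch under those assumptions (function names are illustrative, not from the paper):

```python
import math

def advantage_bound(eps: float) -> float:
    """Standard upper bound on membership-inference advantage (TPR - FPR)
    for any pure eps-DP mechanism."""
    return (math.exp(eps) - 1.0) / (math.exp(eps) + 1.0)

def eps_for_advantage(target_adv: float) -> float:
    """Largest eps whose advantage bound still meets the target
    (exact inverse of advantage_bound)."""
    assert 0.0 < target_adv < 1.0
    return math.log((1.0 + target_adv) / (1.0 - target_adv))

def laplace_scale(sensitivity: float, eps: float) -> float:
    """Laplace-mechanism scale b = sensitivity / eps achieving eps-DP."""
    return sensitivity / eps

# Risk-driven calibration: cap the attacker's advantage at 10%.
eps = eps_for_advantage(0.10)  # ~0.2007
b = laplace_scale(1.0, eps)    # ~4.98 for a sensitivity-1 count query
```

Tightening the tolerated advantage directly tightens ε and hence raises the required noise scale, which is the utility-privacy trade-off the summary refers to.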

🛡️ Threat Analysis

Model Inversion Attack

The central focus is data reconstruction attacks, in which adversaries attempt to recover private training data or attributes from DP-protected mechanisms; the paper characterizes optimal adversary strategies for reconstruction and derives tight bounds on the reconstruction advantage.

Membership Inference Attack

Membership inference attacks are explicitly subsumed as a special case of the unified reconstruction-advantage framework; prior MIA-based noise-calibration approaches are directly compared against and extended.
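An advantage-style audit of the membership-inference special case can be illustrated with a Monte-Carlo sketch (a standard MIA test against the Laplace mechanism, not the paper's RAD estimator): simulate the mechanism on neighboring counts 0 and 1, run the likelihood-ratio attacker that guesses "member" when the noisy output exceeds 0.5, and check the empirical advantage against the pure-ε-DP bound.

```python
import math
import random

def audit_mia_advantage(eps: float, n: int = 200_000, seed: int = 0) -> float:
    """Empirical membership-inference advantage (TPR - FPR) against the
    Laplace mechanism on neighboring counts 0 vs 1, using the
    likelihood-ratio attacker (guess 'member' iff noisy output > 0.5)."""
    rng = random.Random(seed)
    b = 1.0 / eps  # Laplace scale for a sensitivity-1 count query

    def lap() -> float:
        # Difference of two Exp(scale=b) draws is Laplace(scale=b).
        return b * (math.log(1.0 - rng.random()) - math.log(1.0 - rng.random()))

    tpr = sum(1.0 + lap() > 0.5 for _ in range(n)) / n  # world: record present
    fpr = sum(0.0 + lap() > 0.5 for _ in range(n)) / n  # world: record absent
    return tpr - fpr

emp = audit_mia_advantage(eps=1.0)
bound = (math.exp(1.0) - 1.0) / (math.exp(1.0) + 1.0)  # ~0.462 for eps = 1
assert emp <= bound  # a valid audit must stay under the claimed-eps bound
```

For this attacker the empirical advantage converges to 1 − e^(−ε/2) ≈ 0.393 at ε = 1, safely below the generic pure-DP bound of about 0.462; a measured advantage above the bound would falsify the claimed ε, which is the basic logic of DP auditing.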


Details

Domains
tabular
Model Types
traditional_ml
Threat Tags
white_box, black_box, training_time
Datasets
MNIST
Applications
differentially private data release, dp-sgd machine learning training, data management systems, dp auditing