
TADP-RME: A Trust-Adaptive Differential Privacy Framework for Enhancing Reliability of Data-Driven Systems

Labani Halder 1, Payel Sadhukhan 2, Sarbani Palit 1


Published on arXiv: 2604.08113

Model Inversion Attack

OWASP ML Top 10 — ML03

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

Reduces inference attack success rates by up to 3.1% compared to fixed-budget differential privacy, without significant utility degradation

TADP-RME

Novel technique introduced


Ensuring reliability in adversarial settings necessitates treating privacy as a foundational component of data-driven systems. While differential privacy and cryptographic protocols offer strong guarantees, existing schemes rely on a fixed privacy budget, leading to a rigid utility-privacy trade-off that fails under heterogeneous user trust. Moreover, noise-only differential privacy preserves geometric structure, which inference attacks exploit, causing privacy leakage. We propose TADP-RME (Trust-Adaptive Differential Privacy with Reverse Manifold Embedding), a framework that enhances reliability under varying levels of user trust. It introduces an inverse trust score in the range [0,1] to adaptively modulate the privacy budget, enabling smooth transitions between utility and privacy. Additionally, Reverse Manifold Embedding applies a nonlinear transformation to disrupt local geometric relationships while preserving formal differential privacy guarantees through post-processing. Theoretical and empirical results demonstrate improved privacy-utility trade-offs, reducing attack success rates by up to 3.1 percent without significant utility degradation. The framework consistently outperforms existing methods against inference attacks, providing a unified approach for reliable learning in adversarial environments.
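The abstract's core mechanism — modulating the privacy budget by an inverse trust score in [0,1] — can be illustrated with a small sketch. The linear interpolation between a minimum and maximum budget, and the parameter names `eps_min`/`eps_max`, are assumptions for illustration; the paper's exact modulation function may differ. The Laplace mechanism itself is standard ε-DP.

```python
import numpy as np

def trust_adaptive_epsilon(inverse_trust, eps_min=0.1, eps_max=1.0):
    """Map an inverse trust score in [0, 1] to a privacy budget.

    Higher inverse trust (a less trusted user) yields a smaller epsilon,
    hence more noise. Linear interpolation is an illustrative choice,
    not necessarily the paper's modulation function.
    """
    inverse_trust = float(np.clip(inverse_trust, 0.0, 1.0))
    return eps_max - inverse_trust * (eps_max - eps_min)

def laplace_release(value, sensitivity, inverse_trust, rng=None):
    """Release a scalar under epsilon-DP with a trust-adaptive budget."""
    rng = np.random.default_rng() if rng is None else rng
    eps = trust_adaptive_epsilon(inverse_trust)
    scale = sensitivity / eps  # standard Laplace-mechanism noise scale
    return value + rng.laplace(0.0, scale)
```

A fully trusted user (inverse trust 0) receives the full budget `eps_max`; a fully untrusted user receives `eps_min` and, with the defaults above, ten times the noise scale — the "smooth transition between utility and privacy" the abstract describes.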


Key Contributions

  • Trust-adaptive privacy budget allocation using inverse trust scores to enable smooth privacy-utility trade-offs under heterogeneous user trust
  • Reverse Manifold Embedding (RME) nonlinear transformation that disrupts geometric structure to defend against geometry-based inference attacks
  • Formal privacy guarantees combined with empirical structural protection against membership inference and model inversion
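The second contribution relies on a general DP fact: any data-independent transform applied *after* noising is post-processing, so the ε-DP guarantee carries over while local geometry can be freely distorted. The sketch below uses random tanh features as a stand-in for the paper's actual Reverse Manifold Embedding, whose construction is not reproduced here; the projection dimension and seed are illustrative assumptions.

```python
import numpy as np

def dp_then_embed(x, sensitivity, eps, out_dim=16, seed=0):
    """Noise-then-transform sketch in the spirit of RME.

    Step 1 adds Laplace noise for epsilon-DP; step 2 applies a fixed,
    data-independent nonlinear map. Because the map never touches the
    raw data, the DP guarantee is preserved by post-processing. The
    random tanh features are a placeholder for the paper's embedding.
    """
    rng = np.random.default_rng(seed)
    noisy = x + rng.laplace(0.0, sensitivity / eps, size=x.shape)
    W = rng.normal(size=(out_dim, x.shape[-1]))  # fixed random projection
    b = rng.uniform(-np.pi, np.pi, size=out_dim)
    return np.tanh(noisy @ W.T + b)  # nonlinearity disrupts local geometry
```

The point of the design is the ordering: transforming the already-noised output keeps the formal guarantee, while the nonlinearity removes the geometric regularities that noise-only DP leaves for inference attacks to exploit.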

🛡️ Threat Analysis

Model Inversion Attack

Paper explicitly defends against model inversion attacks and evaluates resistance to data reconstruction/extraction attacks. The framework is designed to prevent inference attacks from exploiting geometric structure to recover training data.

Membership Inference Attack

Paper explicitly addresses membership inference attacks as a key threat model and evaluates the framework's effectiveness in reducing membership inference attack success rates.
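A common way to measure such effectiveness is a loss-threshold membership inference baseline: predict "member" when a sample's loss falls below a threshold, since models typically fit training data better. The sketch below is a generic evaluation helper, not the attack used in the paper.

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold):
    """Balanced accuracy of a simple loss-threshold membership attack.

    Samples with loss below `threshold` are guessed to be training
    members. A score of 0.5 means the attack does no better than random
    guessing, which is what an effective defense aims for.
    """
    tpr = np.mean(np.asarray(member_losses) < threshold)      # members caught
    tnr = np.mean(np.asarray(nonmember_losses) >= threshold)  # non-members passed
    return 0.5 * (tpr + tnr)
```

Under a defense like TADP-RME, the reported reduction in attack success corresponds to this score moving closer to 0.5.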


Details

Domains
tabular
Model Types
traditional_ml
Threat Tags
training_time
Applications
privacy-preserving machine learning, healthcare analytics, financial systems