Attack · 2025

GPM: The Gaussian Pancake Mechanism for Planting Undetectable Backdoors in Differential Privacy

Haochen Sun, Xi He



Published on arXiv: 2509.23834

AI Supply Chain Attacks

OWASP ML Top 10 — ML06

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

GPM achieves near-perfect distinguishing-attack success while remaining computationally indistinguishable from the standard Gaussian Mechanism, showing that DP software can be silently backdoored to eliminate its privacy guarantees.

Gaussian Pancake Mechanism (GPM)

Novel technique introduced


Differential privacy (DP) has become the gold standard for preserving individual privacy in data analysis. However, an implicit yet fundamental assumption underlying these rigorous privacy guarantees is the correct implementation and execution of DP mechanisms. Several incidents of unintended privacy loss have occurred due to numerical issues and inappropriate configurations of DP software, which have been successfully exploited in privacy attacks. To better understand the seriousness of defective DP software, we ask the following question: is it possible to elevate these passive defects into active privacy attacks while maintaining covertness? To address this question, we present the Gaussian pancake mechanism (GPM), a novel mechanism that is computationally indistinguishable from the widely used Gaussian mechanism (GM), yet exhibits arbitrarily weaker statistical DP guarantees. This unprecedented separation enables a new class of backdoor attacks: by indistinguishably passing off as the authentic GM, GPM can covertly degrade statistical privacy. Unlike the unintentional privacy loss caused by GM's numerical issues, GPM is an adversarial yet undetectable backdoor attack against data privacy. We formally prove GPM's covertness, characterize its statistical leakage, and demonstrate a concrete distinguishing attack that can achieve near-perfect success rates under suitable parameter choices, both theoretically and empirically. Our results underscore the importance of using transparent, open-source DP libraries and highlight the need for rigorous scrutiny and formal verification of DP implementations to prevent subtle, undetectable privacy compromises in real-world systems.
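The core idea — noise whose marginal looks Gaussian yet whose mass concentrates on thin hidden slabs ("pancakes") — can be caricatured in one dimension. Everything below (`spacing`, `jitter`, the snap-to-grid trick) is an illustrative toy, not the paper's actual construction, which hides the slabs along a secret high-dimensional direction:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mechanism(x, sigma):
    """Authentic Gaussian mechanism: release x plus N(0, sigma^2) noise."""
    return x + sigma * rng.normal()

def pancake_mechanism(x, sigma, spacing=0.5, jitter=0.02):
    """Toy 'pancake' noise (illustrative only): draw Gaussian-scale noise,
    snap it to a hidden grid of width `spacing`, and add a tiny jitter.
    At coarse resolution the output still looks Gaussian, but almost all
    of its mass sits on thin slabs around the grid points."""
    g = sigma * rng.normal()
    return x + spacing * np.round(g / spacing) + jitter * rng.normal()

def residue(y, spacing=0.5):
    """Distance from y to the nearest grid point.  Tiny for pancake
    outputs, near-uniform on [0, spacing/2] for genuine Gaussian ones —
    but only computable by someone who knows the hidden grid."""
    r = y % spacing
    return min(r, spacing - r)
```

In this 1-D toy the comb is visible to anyone who histograms enough samples; in the paper's high-dimensional construction the slabs lie along a secret direction, so recovering the comb without the secret is computationally hard — which is exactly what makes the backdoor undetectable.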


Key Contributions

  • Gaussian Pancake Mechanism (GPM) — a DP mechanism computationally indistinguishable from the standard Gaussian Mechanism yet providing arbitrarily weaker statistical privacy guarantees
  • Formal proof of GPM's covertness and characterization of its statistical privacy leakage
  • Concrete distinguishing attack achieving near-perfect membership inference success rate under suitable parameter choices, validated both theoretically and empirically

🛡️ Threat Analysis

Membership Inference Attack

The concrete harm enabled by GPM is a distinguishing attack that determines with near-perfect accuracy whether a specific individual's data was included in the analysis — in effect, a membership inference attack against the privacy of the underlying data.
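To make the membership-inference harm concrete, here is a self-contained 1-D caricature. The grid parameters (`SPACING`, `JITTER`), the sum-query setup, and the target value are illustrative assumptions, not the paper's construction; the point is that the backdoor's planter knows the hidden grid, so the released value's residue pinpoints the true query answer and hence the target's membership:

```python
import numpy as np

rng = np.random.default_rng(1)
SPACING, JITTER, SIGMA = 0.5, 0.02, 4.0  # toy backdoor parameters

def pancake_release(true_sum):
    """Backdoored 'Gaussian' release of a sum query: Gaussian-scale noise
    snapped to a hidden grid, plus negligible jitter."""
    g = SIGMA * rng.normal()
    return true_sum + SPACING * np.round(g / SPACING) + JITTER * rng.normal()

def dist_to_grid(r):
    r = r % SPACING
    return min(r, SPACING - r)

def attacker_says_member(release, sum_without_target, target_value):
    """The planter knows SPACING, so the true noise residue is ~0:
    whichever candidate sum leaves the smaller residue was the real one."""
    with_t = dist_to_grid(release - (sum_without_target + target_value))
    without_t = dist_to_grid(release - sum_without_target)
    return with_t < without_t

# Simulate 500 releases where the target (value 0.3) IS in the dataset.
hits = sum(
    attacker_says_member(pancake_release(10.0 + 0.3), 10.0, 0.3)
    for _ in range(500)
)
```

With an honest Gaussian mechanism at the same noise scale (`SIGMA = 4.0` versus a 0.3 difference between neighboring datasets), this test would be close to a coin flip; the hidden grid is what collapses the effective privacy to near zero for the secret-holder.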

AI Supply Chain Attacks

GPM can be shipped through DP libraries as a malicious, computationally indistinguishable drop-in replacement for the standard Gaussian Mechanism — a supply chain attack on ML privacy infrastructure that exploits trust in standard DP implementations and that users and auditors cannot detect.


Details

Model Types
traditional_ml
Threat Tags
white_box · training_time · targeted · digital
Applications
differential privacy · privacy-preserving data analysis · privacy-preserving ml