Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy
Johannes Kaiser 1,2, Alexander Ziller 1, Eleni Triantafillou 3, Daniel Rückert 1,4, Georgios Kaissis 2
Published on arXiv
2601.12922
Membership Inference Attack
OWASP ML Top 10 — ML04
Key Finding
Collusion attack successfully increases membership inference susceptibility for 62% of targeted individuals while remaining fully compliant with, and undetectable within, formal iDP guarantees
(ε_i, δ_i, Δ̄)-iDP
Novel technique introduced
Individual Differential Privacy (iDP) promises users control over their privacy, but this promise can be broken in practice. We reveal a previously overlooked vulnerability in sampling-based iDP mechanisms: while the iDP guarantees are formally satisfied, an individual's privacy risk is not governed solely by their own privacy budget, but depends critically on the privacy choices of all other data contributors. This creates a mismatch between the promise of individual privacy control and the reality of a system in which risk is collectively determined. We demonstrate empirically that certain distributions of privacy preferences can unintentionally inflate the privacy risk of individuals, even when their formal guarantees are met. Moreover, this excess risk provides an exploitable attack vector: a central adversary or a set of colluding adversaries can deliberately choose privacy budgets to amplify the vulnerability of targeted individuals. Most importantly, this attack operates entirely within the guarantees of DP, keeping the excess vulnerability hidden. Our empirical evaluation demonstrates successful attacks against 62% of targeted individuals, substantially increasing their membership inference susceptibility. To mitigate this, we propose (ε_i, δ_i, Δ̄)-iDP, a privacy contract that uses Δ-divergences to provide users with a hard upper bound on their excess vulnerability while offering flexibility in mechanism design. Our findings expose a fundamental challenge to the current paradigm, demanding a re-evaluation of how iDP systems are designed, audited, communicated, and deployed so that excess risks become transparent and controllable.
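The abstract does not spell out the sampling mechanism, but a common construction of sampling-based iDP calibrates each individual's inclusion probability against a shared base mechanism using privacy amplification by subsampling. The sketch below is an illustrative assumption, not the paper's exact mechanism: it shows how the base budget, and hence one participant's sampling rate, shifts when other participants change their declared budgets, even though the participant's own formal guarantee stays fixed.

```python
import math

def sampling_rate(eps_i, eps_base):
    # Privacy amplification by subsampling: running an eps_base-DP
    # mechanism on a sample that includes a record with probability q
    # gives that record log(1 + q * (e^eps_base - 1))-DP.
    # Invert this to find the q_i that meets individual budget eps_i.
    return (math.exp(eps_i) - 1) / (math.exp(eps_base) - 1)

def rates(budgets):
    # Assumption: the base mechanism is calibrated to the loosest
    # declared budget, so everyone else is subsampled relative to it.
    eps_base = max(budgets.values())
    return {k: sampling_rate(e, eps_base) for k, e in budgets.items()}

# Honest population: the target wants eps = 1.0 among like-minded peers.
honest = rates({"target": 1.0, "peer1": 1.0, "peer2": 1.0})

# Colluders declare loose budgets (eps = 8.0). The target's formal
# guarantee is unchanged, but the base mechanism shifts and so does
# the target's inclusion probability.
collusion = rates({"target": 1.0, "colluder1": 8.0, "colluder2": 8.0})

print(honest["target"])     # 1.0 (included with certainty)
print(collusion["target"])  # far smaller inclusion probability
```

The point of the sketch is only that `rates` is a function of the whole budget vector: the target's effective treatment by the mechanism is collectively determined, which is the gap between formal guarantee and realized risk that the paper exploits.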
Key Contributions
- Reveals that in sampling-based iDP, an individual's true privacy risk depends on all participants' privacy budget choices, not just their own — breaking iDP's core promise of individual control
- Demonstrates a collusion attack where a central adversary or colluding participants deliberately choose privacy budgets to inflate membership inference susceptibility of targeted individuals, achieving 62% attack success while conforming to DP guarantees
- Proposes (ε_i, δ_i, Δ̄)-iDP, a new privacy contract using Δ-divergences to provide users with a hard upper bound on excess vulnerability
🛡️ Threat Analysis
The primary contribution is demonstrating that colluding adversaries can exploit iDP's sampling mechanism to substantially increase membership inference susceptibility for 62% of targeted individuals, while remaining hidden within formal DP guarantees.