
Evaluating the Defense Potential of Machine Unlearning against Membership Inference Attacks

Theodoros Tsiolakis , Vasilis Perifanis , Nikolaos Pavlidis , Christos Chrysanthos Nikolaidis , Aristeidis Sidiropoulos , Pavlos S. Efraimidis


Published on arXiv: 2508.16150

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

SCRUB provides the most balanced defense against MIAs with minimal collateral damage, while Negative Gradient degrades membership signals indiscriminately and SFTC's divergence effect can reinforce MIA vulnerability on retained data.


Membership Inference Attacks (MIAs) pose a significant privacy risk by enabling adversaries to determine whether a specific data point was part of a model's training set. This work empirically investigates whether machine unlearning (MU) algorithms can function as a targeted, active defense mechanism in scenarios where a post-training privacy audit identifies specific classes or individuals as highly susceptible to MIAs. By 'dulling' the model's categorical memory of these samples, the process mitigates the membership signal and reduces the MIA success rate for the most vulnerable users. We evaluate the defense potential of three MU algorithms, Negative Gradient (neg grad), SCalable Remembering and Unlearning unBound (SCRUB), and Selective Fine-tuning and Targeted Confusion (SFTC), across four diverse datasets and three complexity-based model groups. Our findings reveal that MU can function as a countermeasure against MIAs, though its success is critically contingent on algorithm choice, model capacity, and a pronounced sensitivity to learning rates. While Negative Gradient often induces a generalized degradation of membership signals across both the forget and retain sets, we identify in SFTC a critical 'divergence effect' where targeted forgetting reinforces the membership signal of retained data. Conversely, SCRUB provides a more balanced defense with minimal collateral impact from the MIA perspective.
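To make the attack side of this concrete, the membership signal exploited by many MIAs is simply the model's overconfidence on training points. The sketch below is a minimal confidence-thresholding attacker, not the specific attack used in the paper; the threshold value and toy probabilities are illustrative assumptions.

```python
import numpy as np

def confidence_mia(model_probs, true_labels, threshold=0.9):
    """Flag a point as a training-set member when the model's confidence
    on its true label exceeds a threshold (toy confidence-based MIA)."""
    confidences = model_probs[np.arange(len(true_labels)), true_labels]
    return confidences > threshold

# Toy example: a memorized (member) point vs. an uncertain (non-member) point.
probs = np.array([[0.98, 0.01, 0.01],   # overconfident on class 0
                  [0.40, 0.35, 0.25]])  # near-uniform, low membership signal
labels = np.array([0, 0])
preds = confidence_mia(probs, labels)   # -> [ True False]
```

Unlearning defends against exactly this: by lowering the model's confidence on the audited high-risk samples, their predictions become indistinguishable from non-members'.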


Key Contributions

  • Frames machine unlearning as an active, targeted post-training defense against membership inference attacks for high-risk samples identified via privacy auditing
  • Identifies a 'divergence effect' in SFTC where targeted forgetting paradoxically reinforces membership signals for retained data
  • Demonstrates that SCRUB provides the most balanced MIA defense with minimal collateral impact, while effectiveness is critically sensitive to learning rate and model capacity
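The indiscriminate degradation attributed to Negative Gradient follows from its mechanism: it simply reverses the sign of the training update on the forget set. A minimal NumPy sketch of that idea on a linear softmax classifier is shown below; the function names, learning rate, and model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def neggrad_step(W, X, Y_onehot, lr=0.1):
    """One Negative Gradient update: apply the cross-entropy gradient on
    the forget set with a '+' sign (gradient ASCENT), raising the loss on
    those samples so their membership signal is weakened."""
    probs = softmax(X @ W)
    grad = X.T @ (probs - Y_onehot) / len(X)  # standard CE gradient
    return W + lr * grad                      # ascent, not descent
```

Because the update is untargeted in parameter space, it perturbs weights shared with the retain set as well, which is consistent with the generalized signal degradation the paper reports; the paper's sensitivity to learning rate corresponds to the `lr` step size here.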

🛡️ Threat Analysis

Membership Inference Attack

The paper's primary contribution is empirically evaluating machine unlearning (Negative Gradient, SCRUB, SFTC) as a defense mechanism specifically against membership inference attacks, measuring changes in MIA success rates for vulnerable samples.


Details

Domains
vision, tabular
Model Types
cnn, traditional_ml
Threat Tags
black_box, inference_time, targeted
Datasets
CIFAR-10, CIFAR-100
Applications
image classification, tabular data classification