AdaMixup: A Dynamic Defense Framework for Membership Inference Attack Mitigation
Ying Chen¹, Jiajing Chen², Yijie Weng³, ChiaHua Chang⁴, Dezhi Yu², Guanbiao Lin⁵
Published on arXiv (arXiv:2501.02182)
Membership Inference Attack
OWASP ML Top 10 — ML04
Key Finding
AdaMixup significantly reduces membership inference attack success rates while maintaining model accuracy competitive with differential privacy and static mixup baselines.
AdaMixup
Novel technique introduced
Membership inference attacks have emerged as a significant privacy concern in the training of deep learning models, where attackers can infer whether a data point was part of the training set based on the model's outputs. To address this challenge, we propose a novel defense mechanism, AdaMixup. AdaMixup employs adaptive mixup techniques to enhance the model's robustness against membership inference attacks by dynamically adjusting the mixup strategy during training. This method not only improves the model's privacy protection but also maintains high performance. Experimental results across multiple datasets demonstrate that AdaMixup significantly reduces the risk of membership inference attacks while achieving a favorable trade-off between defensive efficiency and model accuracy. This research provides an effective solution for data privacy protection and lays the groundwork for future advancements in mixup training methods.
Key Contributions
- AdaMixup: a defense that dynamically adjusts the mixup interpolation ratio during training based on model performance, rather than using a fixed ratio
- Demonstrates a favorable trade-off between MIA defense effectiveness and model accuracy across multiple datasets
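The adaptive-ratio idea above can be sketched in a few lines. This is an illustrative assumption of how such a scheme might look, not the paper's exact algorithm: `adaptive_alpha` and its gap-based schedule are hypothetical names and heuristics, built on standard mixup (convex combinations of inputs and one-hot labels with a Beta-sampled weight).

```python
import numpy as np

def adaptive_alpha(train_acc, val_acc, base_alpha=0.2, max_alpha=1.0):
    """Illustrative schedule (assumption, not the paper's rule): increase the
    Beta concentration parameter -- i.e., mix more aggressively -- as the
    train/validation accuracy gap grows, since a larger gap suggests
    memorization and therefore higher membership-inference risk."""
    gap = max(0.0, train_acc - val_acc)
    # gap of 0 keeps base_alpha; a gap of 0.2 or more saturates at max_alpha
    return min(max_alpha, base_alpha + gap * (max_alpha - base_alpha) / 0.2)

def mixup_batch(x, y_onehot, alpha, rng):
    """Standard mixup: blend a batch with a shuffled copy of itself using a
    single weight lam ~ Beta(alpha, alpha), applied to inputs and labels."""
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix

# Usage: each epoch, recompute alpha from the current accuracy gap,
# then apply mixup to every training batch with that alpha.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
y = np.eye(3)[rng.integers(0, 3, size=8)]
alpha = adaptive_alpha(train_acc=0.99, val_acc=0.85)
x_mix, y_mix = mixup_batch(x, y, alpha, rng)
```

The design point is simply that the interpolation strength becomes a function of an overfitting signal rather than a fixed hyperparameter, which is the distinction the paper draws against static mixup.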
🛡️ Threat Analysis
The paper's contribution is purely defensive: AdaMixup is explicitly designed to prevent adversaries from inferring whether specific data points were in the training set.
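To make the threat concrete, a common membership inference baseline (not from this paper) flags a sample as a training member when the model's top-class confidence is high, exploiting the fact that overfit models are more confident on data they trained on. A minimal sketch, with illustrative function names:

```python
import numpy as np

def threshold_attack(member_conf, nonmember_conf, threshold):
    """Predict 'member' when top-class confidence exceeds the threshold;
    return the attack's balanced accuracy over both groups (0.5 = chance)."""
    tpr = np.mean(member_conf > threshold)      # members correctly flagged
    tnr = np.mean(nonmember_conf <= threshold)  # non-members correctly passed
    return 0.5 * (tpr + tnr)

def best_threshold(member_conf, nonmember_conf):
    """Sweep observed confidences as candidate thresholds and keep the one
    that maximizes attack accuracy -- the attacker's best-case success."""
    cands = np.unique(np.concatenate([member_conf, nonmember_conf]))
    accs = [threshold_attack(member_conf, nonmember_conf, t) for t in cands]
    i = int(np.argmax(accs))
    return cands[i], accs[i]

# Toy confidences: members tend to score higher than non-members.
members = np.array([0.99, 0.95, 0.90, 0.60])
nonmembers = np.array([0.80, 0.50, 0.40, 0.30])
thr, acc = best_threshold(members, nonmembers)
```

A defense like AdaMixup succeeds to the extent that it pushes this best-case attack accuracy back toward 0.5 (chance) without sacrificing test accuracy.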