
SoK: Challenges in Tabular Membership Inference Attacks

Cristina Pêra (1,2), Tânia Carvalho (3), Maxime Cordy (3), Luís Antunes (1,2)

102 references

Published on arXiv: 2601.15874

Threat: Membership Inference Attack (OWASP ML Top 10 — ML04)

Key Finding

MIAs show generally poor performance on tabular data, but even weak attacks successfully expose a large proportion of single-out records with unique attribute signatures.


Membership Inference Attacks (MIAs) are currently a dominant approach for evaluating privacy in machine learning applications. Despite their significance in identifying records belonging to the training dataset, several concerns remain unexplored, particularly with regard to tabular data. In this paper, we first provide an extensive review and analysis of MIAs across two main learning paradigms, centralized and federated learning, extending and refining the taxonomy for both. Second, we evaluate the efficacy of MIAs on tabular data using several attack strategies, including defenses. Furthermore, in a federated learning scenario, we consider the threat posed by an outsider adversary, which is often neglected. Third, we demonstrate the high vulnerability of single-outs (records with a unique signature) to MIAs. Lastly, we explore how MIAs transfer across model architectures. Our results point towards generally poor performance of these attacks on tabular data, which contrasts with previous state-of-the-art results. Notably, even attacks with limited overall performance can still expose a large portion of single-outs. Moreover, our findings suggest that using different surrogate models makes MIAs more effective.
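To make the core attack concrete, here is a minimal sketch of a confidence-threshold MIA on synthetic tabular data. This is not the paper's exact attack suite; the model, data, and threshold `tau` are illustrative assumptions. The adversary exploits the fact that overfitted models tend to be more confident on their own training records (members) than on unseen records (non-members):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic tabular data; the first half serves as the target's training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, y_mem = X[:1000], y[:1000]    # members (training records)
X_non, y_non = X[1000:], y[1000:]    # non-members (held-out records)

target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_mem, y_mem)

def attack_scores(model, X):
    # Attack signal: the model's confidence in its predicted (top) class.
    return model.predict_proba(X).max(axis=1)

s_mem = attack_scores(target, X_mem)
s_non = attack_scores(target, X_non)

# Predict "member" whenever confidence exceeds an adversary-chosen threshold.
tau = 0.8
tpr = (s_mem > tau).mean()   # fraction of members correctly flagged
fpr = (s_non > tau).mean()   # fraction of non-members wrongly flagged
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}")
```

Because the random forest memorizes its training set, member confidences concentrate near 1.0 and the gap between TPR and FPR is the attack's advantage; on tabular data, the paper finds this advantage is often small in practice.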


Key Contributions

  • Extended and refined taxonomy of Membership Inference Attacks covering both centralized and federated learning paradigms for tabular data
  • Empirical evaluation showing MIAs generally underperform on tabular data, contrasting prior state-of-the-art claims, while still exposing a large fraction of single-out records
  • Analysis of outsider adversary threat in federated learning and cross-architecture MIA transferability, showing surrogate model diversity improves attack effectiveness
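The "single-out" notion above can be illustrated directly: a single-out is a record whose combination of attribute values appears exactly once in the dataset. A minimal sketch, using toy records and hypothetical quasi-identifier attributes (age, ZIP, job):

```python
from collections import Counter

# Toy tabular records as (age, zip, job) tuples; values are illustrative.
records = [
    (34, "4000", "nurse"),
    (34, "4000", "nurse"),
    (51, "1100", "clerk"),
    (51, "1100", "teacher"),
    (29, "4700", "clerk"),
]

# A single-out has an attribute combination that occurs exactly once.
freq = Counter(records)
single_outs = [r for r in records if freq[r] == 1]
print(single_outs)  # 3 of the 5 records are single-outs
```

Such records carry a unique signature, which is why, per the paper's findings, even weak MIAs expose a large fraction of them.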

🛡️ Threat Analysis

Membership Inference Attack

The paper is entirely focused on Membership Inference Attacks — reviewing taxonomies, empirically evaluating attack strategies and defenses, and studying MIA transferability across model architectures specifically for tabular data in both centralized and federated learning settings.
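The cross-architecture transfer setting can be sketched as follows: the adversary trains a surrogate (shadow) model of a *different* architecture on auxiliary data, fits a membership classifier on the surrogate's in/out confidences, and applies it to the target. All model choices and data splits here are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

# Disjoint splits: target training data, adversary's shadow data, non-members.
X, y = make_classification(n_samples=3000, n_features=20, random_state=1)
X_tgt, y_tgt = X[:1000], y[:1000]          # target's training set (members)
X_shd, y_shd = X[1000:2000], y[1000:2000]  # adversary's shadow data
X_out = X[2000:]                           # non-members for evaluation

# Target and surrogate deliberately use different architectures,
# mimicking the cross-architecture transfer setting.
target = GradientBoostingClassifier(random_state=1).fit(X_tgt, y_tgt)
X_in, X_hold = X_shd[:500], X_shd[500:]
surrogate = LogisticRegression(max_iter=1000).fit(X_in, y_shd[:500])

def feats(model, X):
    # Attack feature: the queried model's top-class confidence.
    return model.predict_proba(X).max(axis=1).reshape(-1, 1)

# Train a membership classifier on the surrogate's in/out confidences...
Z = np.vstack([feats(surrogate, X_in), feats(surrogate, X_hold)])
m = np.r_[np.ones(500), np.zeros(500)]
attack = LogisticRegression().fit(Z, m)

# ...then transfer it, unchanged, to the target model.
p_mem = attack.predict(feats(target, X_tgt))
p_non = attack.predict(feats(target, X_out))
print(f"advantage = {p_mem.mean() - p_non.mean():.2f}")
```

The paper's finding that diverse surrogate models make MIAs more effective suggests that, in practice, an adversary benefits from running this pipeline with several surrogate architectures rather than one.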


Details

Domains
tabular, federated-learning
Model Types
traditional_ml, federated
Threat Tags
black_box, grey_box, training_time
Applications
tabular data classification, federated learning