Exponential-Family Membership Inference: From LiRA and RMIA to BaVarIA

Rickard Brännvall

Published on arXiv: 2603.11799

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

BaVarIA-t achieves best AUC across all 7 shadow-model budgets (K=4 to 254), with the largest improvements over LiRA and RMIA in the practically important low-shadow-model and offline regimes.

BaVarIA

Novel technique introduced


Membership inference attacks (MIAs) are becoming standard tools for auditing the privacy of machine learning models. The leading attacks -- LiRA (Carlini et al., 2022) and RMIA (Zarifzadeh et al., 2024) -- appear to use distinct scoring strategies, while the recently proposed BASE (Lassila et al., 2025) was shown to be equivalent to RMIA, making it difficult for practitioners to choose among them. We show that all three are instances of a single exponential-family log-likelihood ratio framework, differing only in their distributional assumptions and the number of parameters estimated per data point. This unification reveals a hierarchy (BASE1-4) that connects RMIA and LiRA as endpoints of a spectrum of increasing model complexity. Within this framework, we identify variance estimation as the key bottleneck at small shadow-model budgets and propose BaVarIA, a Bayesian variance inference attack that replaces threshold-based parameter switching with conjugate normal-inverse-gamma priors. BaVarIA yields a Student-t predictive (BaVarIA-t) or a Gaussian with stabilized variance (BaVarIA-n), providing stable performance without additional hyperparameter tuning. Across 12 datasets and 7 shadow-model budgets, BaVarIA matches or improves upon LiRA and RMIA, with the largest gains in the practically important low-shadow-model and offline regimes.
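The Gaussian endpoint of this spectrum (the LiRA-style score) can be illustrated as a per-example log-likelihood ratio between distributions fitted to shadow-model confidences. A minimal sketch, assuming per-example confidence scores from "in" and "out" shadow models; the function and variable names here are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm

def gaussian_llr_score(target_conf, in_confs, out_confs, eps=1e-8):
    """LiRA-style membership score: log-likelihood ratio of the target
    model's confidence under Gaussians fitted to the confidences of
    shadow models trained with ("in") and without ("out") the example."""
    mu_in, sd_in = np.mean(in_confs), np.std(in_confs) + eps
    mu_out, sd_out = np.mean(out_confs), np.std(out_confs) + eps
    # Positive score: the confidence looks more like a member's.
    return norm.logpdf(target_conf, mu_in, sd_in) - norm.logpdf(target_conf, mu_out, sd_out)
```

With few shadow models, the per-example standard deviations `sd_in` and `sd_out` are noisy, which is exactly the variance-estimation bottleneck the abstract identifies.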


Key Contributions

  • Unifying exponential-family log-likelihood ratio framework showing LiRA, RMIA, and BASE are instances of the same parametric family differing only in distributional assumptions and parameter sharing
  • BaVarIA attack using conjugate normal-inverse-gamma Bayesian priors for variance estimation, yielding a Student-t predictive (BaVarIA-t) and stabilized Gaussian (BaVarIA-n) variant without hyperparameter tuning
  • Empirical evaluation across 12 datasets and 7 shadow-model budgets showing BaVarIA-t achieves best AUC at all K, with largest gains in the low-budget and offline regimes
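The normal-inverse-gamma machinery behind BaVarIA-t is a textbook conjugate update whose posterior predictive is a Student-t. A hedged sketch of that standard update (the prior hyperparameter values below are our own illustrative defaults, not the paper's):

```python
import numpy as np
from scipy.stats import t as student_t

def nig_posterior_predictive(x, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
    """Conjugate normal-inverse-gamma update for Gaussian data with unknown
    mean and variance; returns the Student-t posterior predictive for a new
    observation (standard textbook result)."""
    x = np.asarray(x, dtype=float)
    n, xbar = x.size, x.mean()
    ss = ((x - xbar) ** 2).sum()
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2.0
    beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n)
    # Posterior predictive: Student-t with 2*alpha_n degrees of freedom.
    scale = np.sqrt(beta_n * (kappa_n + 1.0) / (alpha_n * kappa_n))
    return student_t(df=2.0 * alpha_n, loc=mu_n, scale=scale)
```

The prior contribution `beta0` keeps the predictive scale away from zero even when the sample variance collapses at small K, which is the "stabilized variance" idea behind the BaVarIA-n variant.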

🛡️ Threat Analysis

Membership Inference Attack

The paper directly proposes, analyzes, and improves membership inference attacks — determining whether specific data points were in a model's training set. BaVarIA is a new MIA that leverages Bayesian variance estimation to outperform LiRA and RMIA.
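As an illustration of how such a Bayesian-variance membership score could be assembled, the sketch below scores a target confidence by a log-likelihood ratio between Student-t posterior predictives fitted to in- and out-shadow confidences. This is a self-contained approximation under our own naming and prior choices, not the paper's implementation:

```python
import numpy as np
from scipy.stats import t as student_t

def t_predictive(x, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
    # Standard normal-inverse-gamma conjugate update (textbook formulas);
    # prior hyperparameters are illustrative defaults.
    x = np.asarray(x, dtype=float)
    n, xbar = x.size, x.mean()
    ss = ((x - xbar) ** 2).sum()
    kn = kappa0 + n
    mun = (kappa0 * mu0 + n * xbar) / kn
    an = alpha0 + n / 2.0
    bn = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kn)
    return student_t(df=2.0 * an, loc=mun, scale=np.sqrt(bn * (kn + 1.0) / (an * kn)))

def membership_score(target_conf, in_confs, out_confs):
    """Positive score suggests 'member'; negative suggests 'non-member'."""
    return (t_predictive(in_confs).logpdf(target_conf)
            - t_predictive(out_confs).logpdf(target_conf))
```

Thresholding this score (or ranking examples by it) yields the AUC-style evaluation reported in the Key Finding above.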


Details

Domains
vision, tabular
Model Types
cnn, traditional_ml
Threat Tags
black_box, training_time
Datasets
12 image and tabular datasets (unspecified in excerpt)
Applications
machine learning model privacy auditing, membership inference