
Sequential Membership Inference Attacks

Thomas Michel 1,2,3,4, Debabrota Basu 1,2,3,4, Emilie Kaufmann 1,2,3,4

0 citations · 33 references · arXiv (Cornell University)


Published on arXiv

2602.16596

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

Practical variants of SeMI* yield tighter lower bounds on privacy loss than baseline MIAs across multiple data distributions and models trained or fine-tuned with DP-SGD, by exploiting the full sequence of model updates.

SeMI*

Novel technique introduced


Modern AI models are not static: they go through multiple updates over their lifecycles. Exploiting these model dynamics to build stronger Membership Inference (MI) attacks and tighter privacy audits is therefore a timely question. Although the literature empirically shows that using a sequence of model updates can increase the power of MI attacks, rigorous analysis of "optimal" MI attacks has been limited to static models with infinitely many samples. Hence, we develop an "optimal" MI attack, SeMI*, that uses the sequence of model updates to detect the presence of a target inserted at a given update step. For empirical mean computation, we derive the optimal power of SeMI* with access to a finite number of samples, with or without privacy; our results recover the existing asymptotic analysis. We observe that access to the model sequence avoids the dilution of MI signals, unlike existing attacks on the final model, where the MI signal vanishes as training data accumulates. Furthermore, an adversary can use SeMI* to tune both the insertion time and the canary to obtain tighter privacy audits. Finally, experiments across data distributions and models trained or fine-tuned with DP-SGD demonstrate that practical variants of SeMI* lead to tighter privacy audits than the baselines.
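The dilution effect described in the abstract can be illustrated with a toy running-mean computation. The sketch below is our own illustration, not the paper's SeMI* algorithm: a canary's contribution to the final mean shrinks as 1/n, while an adversary who observes the intermediate update at the insertion step t sees a signal that is only diluted by 1/(t+1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration (not the paper's SeMI*): membership signal
# in a running empirical mean, with vs. without access to the sequence
# of intermediate updates.
n = 10_000                 # total number of samples
t_insert = 500             # step at which the canary may be inserted
canary = 5.0               # out-of-distribution target point

data = rng.normal(0.0, 1.0, size=n)
data_in = data.copy()
data_in[t_insert] = canary  # "member" world: canary replaces one sample

def running_means(x):
    """Sequence of empirical means after each update step."""
    return np.cumsum(x) / np.arange(1, len(x) + 1)

means_out = running_means(data)     # non-member world
means_in = running_means(data_in)   # member world

# A final-model attack sees only the last mean: the canary's
# contribution is diluted by a factor of 1/n.
final_gap = abs(means_in[-1] - means_out[-1])

# A sequence attack inspects the mean right at the insertion step,
# where the canary is only diluted by 1/(t_insert + 1).
step_gap = abs(means_in[t_insert] - means_out[t_insert])

print(f"final-model signal: {final_gap:.6f}")  # ~ |canary - x_t| / n
print(f"per-step signal:    {step_gap:.6f}")   # ~ |canary - x_t| / (t+1)
```

Here the per-step signal exceeds the final-model signal by a factor of roughly n / (t_insert + 1), which is why the MI signal does not vanish for a sequence-aware adversary even as training data accumulates.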


Key Contributions

  • Derives SeMI*, a theoretically optimal membership inference attack that leverages the full sequence of model checkpoints rather than only the final model, with finite-sample guarantees with and without DP.
  • Shows that accessing model update sequences prevents MI signal dilution — a key failure mode of attacks on the final model as training data accumulates.
  • Demonstrates that an adversary can use SeMI* to jointly optimize canary insertion time and canary design, achieving tighter empirical privacy audits than baselines on DP-SGD-trained models.

🛡️ Threat Analysis

Membership Inference Attack

The entire paper is about membership inference attacks — specifically, determining whether a target data point was inserted at a particular update step of a sequentially updated model. SeMI* is a novel, theoretically optimal MIA that outperforms existing static-model MIAs and is applied to privacy auditing of DP-SGD-trained models.
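To make the auditing use case concrete, here is a hedged sketch (parameters and test are our own simplification, not the paper's method) of a per-step membership test against a single DP-noised update, in the style of a DP-SGD privacy audit: the adversary thresholds one Gaussian-noised update to distinguish whether a known canary was in that step's batch, and converts the resulting (FPR, TPR) pair into an empirical lower bound on the privacy loss.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hedged sketch: likelihood-ratio membership test on one noised update.
# Each step releases the clipped batch contribution plus Gaussian noise,
# as in DP-SGD; the adversary tests whether a known canary was present
# at the suspected insertion step.
sigma = 2.0        # noise multiplier (per-sample sensitivity clipped to 1)
canary = 1.0       # canary's clipped contribution to the update
trials = 100_000

# World 0: update without the canary; World 1: update with it.
base = rng.normal(0.0, sigma, size=trials)
with_canary = canary + rng.normal(0.0, sigma, size=trials)

# Optimal test between two equal-variance Gaussians: midpoint threshold.
threshold = canary / 2.0
fpr = np.mean(base > threshold)          # false positive rate
tpr = np.mean(with_canary > threshold)   # true positive rate

# Pure-DP style audit: (eps, 0)-DP implies TPR <= exp(eps) * FPR,
# so log(TPR / FPR) is an empirical lower bound on eps.
eps_lb = np.log(tpr / fpr)
print(f"TPR={tpr:.3f}  FPR={fpr:.3f}  empirical eps lower bound={eps_lb:.3f}")
```

Attacking a single well-chosen step avoids averaging the canary's signal over the whole training run; the paper's point is that jointly tuning the insertion time and the canary tightens such lower bounds further.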


Details

Model Types
traditional_ml · transformer
Threat Tags
grey_box · training_time · targeted
Applications
privacy auditing · differential privacy evaluation · ml model lifecycle security