
Uncovering Memorization in Time Series Imputation Models: LBRM Membership Inference and Its Link to Attribute Leakage

Faiz Taleb 1,2,3, Ivan Gazeau 2,3, Maryline Laurent 1


Published on arXiv

2603.24213

Membership Inference Attack

OWASP ML Top 10 — ML04

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

LBRM achieves AUROC 0.90 for membership inference; attribute inference achieves 0.87-0.92 recall, and its precision improves from 78% to 90% when targeting the top 25% of sequences ranked by LBRM

LBRM (Loss-Based Reference Model)

Novel technique introduced


Deep learning models for time series imputation are now essential in fields such as healthcare, the Internet of Things (IoT), and finance. However, their deployment raises critical privacy concerns. Beyond the well-known issue of unintended memorization, which has been extensively studied in generative models, we demonstrate that time series models are vulnerable to inference attacks in a black-box setting. In this work, we introduce a two-stage attack framework comprising: (1) a novel membership inference attack based on a reference model that improves detection accuracy, even for models robust to overfitting-based attacks, and (2) the first attribute inference attack that predicts sensitive characteristics of the training data for time series imputation models. We evaluate these attacks on attention-based and autoencoder architectures in two scenarios: models trained from scratch, and fine-tuned models where the adversary has access to the initial weights. Our experimental results demonstrate that the proposed membership attack retrieves a significant portion of the training data, with a tpr@top25% score significantly higher than a naive attack baseline. We also show that our membership attack gives a good indication of whether attribute inference will succeed (with a precision of 90% instead of 78% in the general case).


Key Contributions

  • Novel Loss-Based Reference Model (LBRM) membership inference attack that achieves AUROC 0.90 by comparing the target model with a matched reference model
  • First attribute inference attack for time-series imputation models that recovers sensitive temporal patterns (peaks) with 0.87-0.92 recall
  • Demonstrates operational link between membership and attribute inference: LBRM identifies sequences vulnerable to attribute leakage, improving AIA precision from 78% to 90%
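The paper's exact scoring rule is not reproduced here, but the core LBRM idea stated above (compare the target model's imputation loss against a matched reference model trained on disjoint data) can be sketched as follows. The function names, the MSE loss, and the zero decision threshold are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def imputation_loss(model, seq, mask):
    """Mean squared error of the model's imputed values on the masked entries.
    `model` is any callable (seq, mask) -> imputed sequence."""
    pred = model(seq, mask)
    return float(np.mean((pred[mask] - seq[mask]) ** 2))

def lbrm_score(target_model, reference_model, seq, mask):
    """LBRM-style membership score (illustrative): a sequence whose loss under
    the target model is much lower than under a reference model trained on
    disjoint data is likely a training-set member. Higher score => more likely
    a member."""
    return imputation_loss(reference_model, seq, mask) - imputation_loss(target_model, seq, mask)

# Toy usage: a "memorizing" target reproduces the sequence exactly, while a
# naive reference imputes zeros, so the score is strongly positive for members.
seq = np.array([1.0, 2.0, 3.0])
mask = np.array([True, False, True])
memorizing = lambda s, m: s
naive = lambda s, m: np.zeros_like(s)
score = lbrm_score(memorizing, naive, seq, mask)  # reference loss 5.0 minus target loss 0.0
```

In practice the score would be thresholded (or ranked, as in the paper's top-25% selection) rather than compared against a fixed constant.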

🛡️ Threat Analysis

Model Inversion Attack

The paper also demonstrates an attribute inference attack (AIA) that reconstructs sensitive characteristics (peaks, temporal patterns) of the training data through imputation model queries. This reconstructs private training-data attributes, rather than merely detecting membership.
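The mechanics of the paper's AIA are not detailed in this summary; as a minimal sketch under stated assumptions, an adversary could mask the region of interest, let the imputation model fill it in, and flag a sensitive "peak" attribute when the imputed values rise well above the unmasked baseline. The `factor` threshold and helper name are hypothetical:

```python
import numpy as np

def infer_peak_attribute(model, seq, region, factor=2.0):
    """Illustrative attribute-inference probe (`factor` is an assumed
    threshold, not from the paper): mask `region`, query the imputation
    model, and report whether the imputed values exceed `factor` times
    the mean magnitude of the unmasked baseline."""
    mask = np.zeros(len(seq), dtype=bool)
    mask[region] = True
    imputed = model(seq, mask)
    baseline = np.mean(np.abs(seq[~mask])) + 1e-8  # avoid division-free zero baseline
    return bool(np.max(np.abs(imputed[mask])) > factor * baseline)

# Toy usage: a model that memorized a peak leaks it when queried, while a
# model imputing the flat baseline does not.
flat_seq = np.ones(8)
leaky = lambda s, m: np.where(m, 10.0, s)   # reproduces a memorized peak
benign = lambda s, m: np.where(m, 1.0, s)   # imputes the baseline level
```

This matches the paper's operational link: sequences flagged as members by LBRM are exactly those where such a probe is most likely to succeed.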

Membership Inference Attack

Primary contribution is a novel membership inference attack (LBRM) that determines whether specific time-series samples were in the training set, achieving AUROC of 0.90 in black-box setting.


Details

Domains
timeseries
Model Types
transformer, traditional_ml
Threat Tags
black_box, inference_time
Applications
time series imputation, healthcare, iot, finance