
VidLeaks: Membership Inference Attacks Against Text-to-Video Models

Li Wang 1,2,3, Wenyu Chen 1, Ning Yu 4, Zheng Li 1,2,3, Shanqing Guo 1,2,3

0 citations · 56 references · arXiv


Published on arXiv (2601.11210)

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

VidLeaks achieves an AUC of 82.92% on AnimateDiff and 97.01% on InstructVideo in the strictest query-only black-box setting, establishing that T2V models leak substantial membership information through sparse and temporal memorization.

VidLeaks

Novel technique introduced


The proliferation of powerful Text-to-Video (T2V) models, trained on massive web-scale datasets, raises urgent concerns about copyright and privacy violations. Membership inference attacks (MIAs) provide a principled tool for auditing such risks, yet existing techniques - designed for static data like images or text - fail to capture the spatio-temporal complexities of video generation. In particular, they overlook the sparsity of memorization signals in keyframes and the instability introduced by stochastic temporal dynamics. In this paper, we conduct the first systematic study of MIAs against T2V models and introduce VidLeaks, a novel framework that probes sparse-temporal memorization through two complementary signals: 1) Spatial Reconstruction Fidelity (SRF), which uses Top-K similarity to amplify spatial memorization signals from sparsely memorized keyframes, and 2) Temporal Generative Stability (TGS), which measures semantic consistency across multiple queries to capture temporal leakage. We evaluate VidLeaks under three progressively restrictive black-box settings - supervised, reference-based, and query-only. Experiments on three representative T2V models reveal severe vulnerabilities: VidLeaks achieves an AUC of 82.92% on AnimateDiff and 97.01% on InstructVideo even in the strict query-only setting, posing a realistic and exploitable privacy risk. Our work provides the first concrete evidence that T2V models leak substantial membership information through both sparse and temporal memorization, establishing a foundation for auditing video generation systems and motivating the development of new defenses. Code is available at: https://zenodo.org/records/17972831.


Key Contributions

  • First systematic study of membership inference attacks against text-to-video (T2V) generative models, identifying two novel attack surfaces: sparse keyframe memorization and stochastic temporal dynamics
  • VidLeaks framework combining Spatial Reconstruction Fidelity (SRF) via Top-K similarity to amplify sparse keyframe signals, and Temporal Generative Stability (TGS) measuring semantic consistency across multiple queries
  • Empirical demonstration of severe MIA vulnerabilities across three T2V models under three progressively restrictive black-box settings (supervised, reference-based, query-only), achieving up to 97.01% AUC

🛡️ Threat Analysis

Membership Inference Attack

Core contribution is a membership inference attack (VidLeaks) that determines whether a specific video was in the training set of T2V models — the canonical ML04 binary membership question. The paper proposes two novel signals (SRF and TGS) specifically for the video generative domain and evaluates them under black-box settings, achieving AUC up to 97.01%.
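To make the two signals concrete, here is a minimal sketch of how SRF and TGS could be scored over frame embeddings. This is an illustration of the stated ideas, not the paper's exact formulation: the function names, the cosine-similarity choice, and the way scores are combined are all assumptions; the paper's precise definitions may differ.

```python
import numpy as np

def srf_score(gen_frames: np.ndarray, ref_frames: np.ndarray, k: int = 3) -> float:
    """Spatial Reconstruction Fidelity (sketch): take the mean of the
    Top-K pairwise cosine similarities between generated-frame and
    reference-frame embeddings, so a few strongly memorized keyframes
    dominate the score instead of being averaged away."""
    g = gen_frames / np.linalg.norm(gen_frames, axis=1, keepdims=True)
    r = ref_frames / np.linalg.norm(ref_frames, axis=1, keepdims=True)
    sims = g @ r.T                          # all pairwise cosine similarities
    topk = np.sort(sims.ravel())[-k:]       # keep only the K strongest matches
    return float(topk.mean())

def tgs_score(query_embeddings: np.ndarray) -> float:
    """Temporal Generative Stability (sketch): mean pairwise cosine
    similarity among video-level embeddings produced by repeated
    queries with the same prompt; training members are expected to
    yield more semantically consistent generations."""
    e = query_embeddings / np.linalg.norm(query_embeddings, axis=1, keepdims=True)
    sims = e @ e.T
    n = len(e)
    off_diag = sims[~np.eye(n, dtype=bool)]  # drop self-similarity terms
    return float(off_diag.mean())
```

In a query-only setting, a hypothetical attacker would repeat the prompt several times, compute both scores from an off-the-shelf embedding model, and threshold a combination of them to decide membership.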


Details

Domains
vision, generative
Model Types
diffusion
Threat Tags
black_box, inference_time
Datasets
AnimateDiff, InstructVideo
Applications
text-to-video generation, video generative models