
Combating Data Laundering in LLM Training

Muxing Li 1, Zesheng Ye 1, Sharon Li 2, Feng Liu 1


Published on arXiv (2604.01904)

Membership Inference Attack

OWASP ML Top 10 — ML04

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

SDR consistently strengthens data misuse detection across diverse laundering transformations and LLM families, even when the target model was trained exclusively on laundered variants

Synthesis Data Reversion (SDR)

Novel technique introduced


Data rights owners can detect unauthorized data use in large language model (LLM) training by querying with proprietary samples. Often, superior performance (e.g., higher confidence or lower loss) on a sample relative to unseen data implies it was part of the training corpus, as LLMs tend to perform better on data they have seen during training. However, this detection becomes fragile under data laundering, a practice of transforming the stylistic form of proprietary data while preserving its critical information, in order to obfuscate data provenance. When an LLM is trained exclusively on such laundered variants, it no longer performs better on the originals, erasing the signals that standard detection methods rely on. We counter this by inferring the unknown laundering transformation from black-box access to the target LLM and, via an auxiliary LLM, synthesizing queries that mimic the laundered data, even if rights owners hold only the originals. Because the search space of possible laundering transformations is infinite, we abstract the process into a high-level transformation goal (e.g., "lyrical rewriting") and concrete details (e.g., "with vivid imagery"), and introduce synthesis data reversion (SDR) to instantiate this abstraction. SDR first identifies the most probable goal for synthesis to narrow the search; it then iteratively refines details so that synthesized queries gradually elicit stronger detection signals from the target LLM. Evaluated on the MIMIR benchmark against diverse laundering practices and target LLM families (Pythia, Llama2, and Falcon), SDR consistently strengthens data misuse detection, providing a practical countermeasure to data laundering.
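The loss-comparison signal the abstract describes can be illustrated with a toy stand-in for the target LLM. The sketch below is purely illustrative (a character-bigram language model, not the paper's setup): text seen during training scores a lower average negative log-likelihood than unseen text, which is the membership signal that data rights owners exploit.

```python
# Toy loss-based membership inference. A character-bigram LM stands in for
# the target LLM; per-character negative log-likelihood is the signal.
import math
from collections import defaultdict

def train_bigram(corpus):
    counts = defaultdict(lambda: defaultdict(int))
    for text in corpus:
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
    return counts

def avg_nll(counts, text, vocab_size=128, alpha=1.0):
    # Additive smoothing so unseen bigrams get a finite loss.
    total = 0.0
    for a, b in zip(text, text[1:]):
        row = counts[a]
        denom = sum(row.values()) + alpha * vocab_size
        total += -math.log((row[b] + alpha) / denom)
    return total / max(len(text) - 1, 1)

train_set = ["the quick brown fox jumps over the lazy dog"] * 5
model = train_bigram(train_set)

member_loss = avg_nll(model, "the quick brown fox")        # seen in training
nonmember_loss = avg_nll(model, "zebras quizzically vex")  # never seen
print(member_loss < nonmember_loss)  # lower loss suggests membership
```

Real detectors apply the same idea to an LLM's token-level loss, with calibration against reference data; the toy model only makes the direction of the signal concrete.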


Key Contributions

  • Introduces Synthesis Data Reversion (SDR) to detect data laundering by inferring unknown transformation patterns from black-box LLM access
  • Abstracts laundering transformations into high-level goals and concrete details, iteratively refining synthesized queries to elicit detection signals
  • Demonstrates robust detection across diverse laundering practices on Pythia, Llama2, and Falcon models using MIMIR benchmark
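The two-stage search in the contributions above can be sketched as a greedy loop. Everything here is a hypothetical stand-in (the `synthesize` and `detection_signal` stubs are not the paper's auxiliary LLM or black-box scorer): stage 1 picks the transformation goal whose synthesized query elicits the strongest signal, and stage 2 iteratively adds concrete details while the signal keeps improving.

```python
# Hypothetical sketch of SDR's goal-then-details search (illustrative stubs).

def synthesize(original, goal, details):
    # Stand-in for the auxiliary LLM that rewrites the original sample.
    return f"[{goal}|{';'.join(details)}] {original}"

def detection_signal(query):
    # Stand-in for the black-box target-LLM signal (e.g., negative loss).
    # Here we pretend the target was trained on lyrical, vivid rewrites.
    return query.count("lyrical") + query.count("vivid imagery")

def sdr(original, goals, candidate_details, rounds=3):
    # Stage 1: narrow the infinite search to the most probable goal.
    goal = max(goals, key=lambda g: detection_signal(synthesize(original, g, [])))
    # Stage 2: greedily add details that strengthen the elicited signal.
    details, pool = [], list(candidate_details)
    for _ in range(rounds):
        if not pool:
            break
        best = max(pool, key=lambda d: detection_signal(
            synthesize(original, goal, details + [d])))
        if (detection_signal(synthesize(original, goal, details + [best]))
                <= detection_signal(synthesize(original, goal, details))):
            break  # no detail improves the signal; stop refining
        details.append(best)
        pool.remove(best)
    return synthesize(original, goal, details)

query = sdr("proprietary lyrics snippet",
            goals=["lyrical rewriting", "formal summary"],
            candidate_details=["with vivid imagery", "in plain tone"])
print(query)
```

In the paper, both the synthesis and the scoring are driven by LLMs; the greedy structure above only mirrors the narrow-then-refine control flow.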

🛡️ Threat Analysis

Membership Inference Attack

The core contribution is detecting whether proprietary data was used in LLM training (membership inference) by querying the model and comparing its performance on original versus laundered variants.
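Why querying with the right variant matters can be shown with the same kind of toy bigram model as a stand-in target (illustrative assumptions only, with uppercasing as a crude stand-in for a stylistic laundering transform): once the model is trained solely on the laundered variant, the original scores like unseen text, while a query that mimics the laundered style restores the low-loss signal.

```python
# Toy demo: laundering erases the original-sample signal, but a query in
# the laundered style recovers it. Uppercasing stands in for the transform.
import math
from collections import defaultdict

def train_bigram(corpus):
    counts = defaultdict(lambda: defaultdict(int))
    for text in corpus:
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
    return counts

def avg_nll(counts, text, vocab_size=128, alpha=1.0):
    total = 0.0
    for a, b in zip(text, text[1:]):
        row = counts[a]
        denom = sum(row.values()) + alpha * vocab_size
        total += -math.log((row[b] + alpha) / denom)
    return total / max(len(text) - 1, 1)

original = "the quick brown fox jumps over the lazy dog"
laundered = original.upper()           # crude stand-in "laundering" transform
model = train_bigram([laundered] * 5)  # target trained only on laundered data

original_loss = avg_nll(model, original)
laundered_loss = avg_nll(model, laundered)
unrelated_loss = avg_nll(model, "zebras quizzically vex strange monoliths")
# Original now scores like unseen text; the laundered-style query does not.
print(original_loss, laundered_loss, unrelated_loss)
```

This is the failure mode SDR targets: since the true transform is unknown, it must be inferred before the laundered-style query can be synthesized.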


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, training_time
Datasets
MIMIR
Applications
data rights protection, copyright detection, training data auditing