On the Evidentiary Limits of Membership Inference for Copyright Auditing

Murat Bilgehan Ertan 1,2, Emirhan Böge 3, Min Chen 2, Kaleel Mahmood 4, Marten van Dijk 1,2

0 citations · 57 references · arXiv

Published on arXiv: 2601.12937

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

State-of-the-art MIAs degrade when LLMs are fine-tuned on SAGE-generated paraphrases, demonstrating that MIA signals are not robust to semantics-preserving lexical transformations and are insufficient as standalone evidentiary mechanisms.

SAGE (Structure-Aware SAE-Guided Extraction)

Novel technique introduced


As large language models (LLMs) are trained on increasingly opaque corpora, membership inference attacks (MIAs) have been proposed to audit whether copyrighted texts were used during training, despite growing concerns about their reliability under realistic conditions. We ask whether MIAs can serve as admissible evidence in adversarial copyright disputes where an accused model developer may obfuscate training data while preserving semantic content, and formalize this setting through a judge-prosecutor-accused communication protocol. To test robustness under this protocol, we introduce SAGE (Structure-Aware SAE-Guided Extraction), a paraphrasing framework guided by Sparse Autoencoders (SAEs) that rewrites training data to alter lexical structure while preserving semantic content and downstream utility. Our experiments show that state-of-the-art MIAs degrade when models are fine-tuned on SAGE-generated paraphrases, indicating that their signals are not robust to semantics-preserving transformations. While some leakage remains in certain fine-tuning regimes, these results suggest that MIAs are brittle in adversarial settings and insufficient, on their own, as evidentiary mechanisms for copyright auditing of LLMs.
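The MIAs the paper evaluates generally derive a membership signal from how well the model fits a candidate text. As a hedged illustration of the general idea (not the paper's specific attacks), the following sketches a generic loss-threshold MIA; the per-token log-probabilities and the threshold are illustrative values, not from the paper:

```python
import math

def sequence_nll(token_logprobs):
    """Average negative log-likelihood of a token sequence under the model."""
    return -sum(token_logprobs) / len(token_logprobs)

def loss_threshold_mia(token_logprobs, threshold=2.0):
    """Predict 'member' when the model's loss on the text falls below a
    threshold: lower loss means the model is less surprised by the text,
    which is taken as evidence it appeared in training."""
    return sequence_nll(token_logprobs) < threshold

# Toy per-token log-probabilities (illustrative values only):
seen_text   = [-0.5, -0.8, -0.3, -0.6]  # low loss: flagged as memorized
unseen_text = [-3.1, -2.7, -3.5, -2.9]  # high loss: flagged as unseen

print(loss_threshold_mia(seen_text))    # True
print(loss_threshold_mia(unseen_text))  # False
```

Semantics-preserving paraphrasing attacks this signal directly: if the developer fine-tunes on rewritten text, the original passage no longer yields an unusually low loss, even though its content was effectively used.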


Key Contributions

  • SAGE (Structure-Aware SAE-Guided Extraction): a Sparse Autoencoder-guided paraphrasing framework that rewrites training data to alter lexical structure while preserving semantic content and downstream utility, defeating MIA-based detection
  • A formal judge-prosecutor-accused communication protocol that captures the adversarial evidentiary structure of copyright disputes involving LLM training data
  • Empirical demonstration that state-of-the-art MIAs degrade significantly when LLMs are fine-tuned on SAGE-generated paraphrases, showing MIAs are insufficient as standalone copyright auditing evidence
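The paper's judge-prosecutor-accused protocol assigns each party a role in the evidentiary exchange. The paper's formal message structure is not reproduced here, so the skeleton below is a hypothetical illustration of the roles and information flow only; all function names and the stand-in score function are assumptions for exposition:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Claim:
    text: str        # copyrighted text the prosecutor alleges was trained on
    mia_score: float # membership-inference score submitted as evidence

def prosecutor(suspect_texts: List[str],
               score_fn: Callable[[str], float]) -> List[Claim]:
    """Prosecutor queries the accused model (black-box) and files MIA scores."""
    return [Claim(t, score_fn(t)) for t in suspect_texts]

def accused_obfuscate(corpus: List[str],
                      paraphrase: Callable[[str], str]) -> List[str]:
    """The accused may fine-tune on semantics-preserving paraphrases
    (SAGE plays this role in the paper), weakening the lexical signal
    that MIAs rely on."""
    return [paraphrase(t) for t in corpus]

def judge(claims: List[Claim], threshold: float) -> List[bool]:
    """Judge rules per claim on whether the MIA evidence clears a bar."""
    return [c.mia_score >= threshold for c in claims]

# Illustrative run with a stand-in score function:
score = lambda t: 0.9 if "original" in t else 0.2
verdicts = judge(prosecutor(["original passage", "other text"], score), 0.5)
print(verdicts)  # [True, False]
```

The paper's negative result lives in this loop: when the accused applies the obfuscation step before training, the prosecutor's scores for genuinely used texts drop toward the non-member range, and the judge can no longer separate the two classes reliably.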

🛡️ Threat Analysis

Membership Inference Attack

The paper directly addresses membership inference attacks — their reliability, evidentiary limits, and evasion. SAGE is a novel method designed to defeat MIAs by training on semantics-preserving paraphrases, demonstrating that state-of-the-art MIAs are brittle under adversarial obfuscation of training data.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
training_time, inference_time, black_box
Applications
llm copyright auditing, membership inference