
Neural Breadcrumbs: Membership Inference Attacks on LLMs Through Hidden State and Attention Pattern Analysis

Disha Makhija, Manoj Ghuhan Arivazhagan, Vinayshekhar Bannihatti Kumar, Rashmi Gangadharaiah



Published on arXiv: 2509.05449

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

Analyzing internal transformer representations (hidden states, attention patterns) yields average AUC of 0.85 for membership inference against LLMs, demonstrating privacy leakage even when output-based signals appear protected.

memTrace

Novel technique introduced


Membership inference attacks (MIAs) reveal whether specific data was used to train machine learning models, serving as important tools for privacy auditing and compliance assessment. Recent studies have reported that MIAs perform only marginally better than random guessing against large language models, suggesting that modern pre-training on massive datasets may be free from privacy leakage risks. Our work offers a complementary perspective on these findings by exploring how examining LLMs' internal representations, rather than just their outputs, may provide additional insight into membership inference signals. Our framework, *memTrace*, follows what we call "neural breadcrumbs": informative signals extracted from transformer hidden states and attention patterns as they process candidate sequences. By analyzing layer-wise representation dynamics, attention distribution characteristics, and cross-layer transition patterns, we detect potential memorization fingerprints that traditional loss-based approaches may not capture. This approach yields strong membership detection across several model families, achieving an average AUC of 0.85 on popular MIA benchmarks. Our findings suggest that internal model behaviors can reveal aspects of training data exposure even when output-based signals appear protected, highlighting the need for further research into membership privacy and the development of more robust privacy-preserving training techniques for large language models.


Key Contributions

  • memTrace framework that extracts membership signals from transformer hidden states and attention patterns ('neural breadcrumbs') rather than model outputs alone
  • Analysis of layer-wise representation dynamics, attention distribution characteristics, and cross-layer transition patterns as memorization fingerprints
  • Achieves average AUC of 0.85 on popular MIA benchmarks across several LLM families, outperforming traditional loss-based approaches
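The three feature families named above can be illustrated with a minimal sketch. The paper's actual feature set and classifier are not specified in the available text; the function below is a hypothetical reconstruction assuming hidden states and attention maps have already been extracted (e.g. from a white-box forward pass), and simply turns them into a flat feature vector.

```python
import numpy as np

def breadcrumb_features(hidden_states, attentions):
    """Hypothetical sketch of memTrace-style per-sequence features.

    hidden_states: list of (seq_len, d_model) arrays, one per layer
    attentions:    list of (n_heads, seq_len, seq_len) arrays, one per
                   layer, with each attention row summing to 1
    """
    feats = []
    # Layer-wise representation dynamics: mean activation norm per layer.
    for h in hidden_states:
        feats.append(float(np.linalg.norm(h, axis=-1).mean()))
    # Cross-layer transition patterns: cosine similarity between the
    # mean-pooled representations of consecutive layers.
    pooled = [h.mean(axis=0) for h in hidden_states]
    for a, b in zip(pooled, pooled[1:]):
        denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-9
        feats.append(float(a @ b / denom))
    # Attention distribution characteristics: mean entropy of attention
    # rows per layer (peaked attention -> low entropy).
    for att in attentions:
        p = att + 1e-12
        feats.append(float(-(p * np.log(p)).sum(axis=-1).mean()))
    return np.array(feats)
```

In a full attack, vectors like this would be computed for known member and non-member sequences and fed to a binary classifier; the design choice here is that all features are sequence-level scalars, so feature dimension depends only on layer count, not sequence length.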

🛡️ Threat Analysis

Membership Inference Attack

The paper's primary contribution is a membership inference attack framework that determines whether specific data points were used to train LLMs — the canonical ML04 threat. The novelty is using internal representations (hidden states, attention patterns) rather than loss-based output signals, but the attack goal remains the binary membership question.
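For contrast, the traditional loss-based attack the paper improves on can be stated in a few lines: score each candidate by its (negated) loss under the target model and threshold, with AUC measuring how well members and non-members separate. The loss values below are invented for illustration only; the rank-based AUC is standard.

```python
import numpy as np

def auc(member_scores, nonmember_scores):
    """Rank-based AUC: probability a random member outscores a non-member."""
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            wins += 1.0 if m > n else (0.5 if m == n else 0.0)
    return wins / (len(member_scores) * len(nonmember_scores))

# Loss-based baseline: sequences seen in training tend to have lower loss,
# so negative loss serves as the membership score. Values are hypothetical.
member_losses = np.array([0.8, 1.1, 0.9, 1.0])
nonmember_losses = np.array([1.4, 1.2, 1.6, 1.3])
score = auc(-member_losses, -nonmember_losses)  # → 1.0 (fully separated)
```

When the loss distributions overlap heavily, as recent studies report for large pre-trained LLMs, this AUC collapses toward 0.5; memTrace's claim is that internal-representation features recover separation (average AUC 0.85) that the loss signal alone does not show.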


Details

Domains: nlp
Model Types: llm, transformer
Threat Tags: white_box, inference_time, targeted
Datasets: popular MIA benchmarks (unnamed in available text)
Applications: large language model privacy auditing, membership inference on LLMs