Auditing Language Model Unlearning via Information Decomposition

Anmol Goel 1,2, Alan Ritter 3, Iryna Gurevych 1,2

0 citations · 58 references · arXiv

Published on arXiv · 2601.15111

Model Inversion Attack

OWASP ML Top 10 — ML03

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Residual knowledge — redundant information linearly decodable from internal LLM representations post-unlearning — directly predicts susceptibility to known adversarial data reconstruction attacks, exposing shallow unlearning as an inadequate privacy guarantee

PID Unlearning Audit

Novel technique introduced


We expose a critical limitation in current approaches to machine unlearning in language models: despite the apparent success of unlearning algorithms, information about the forgotten data remains linearly decodable from internal representations. To systematically assess this discrepancy, we introduce an interpretable, information-theoretic framework for auditing unlearning using Partial Information Decomposition (PID). By comparing model representations before and after unlearning, we decompose the mutual information with the forgotten data into distinct components, formalizing the notions of unlearned and residual knowledge. Our analysis reveals that redundant information, shared across both models, constitutes residual knowledge that persists post-unlearning and correlates with susceptibility to known adversarial reconstruction attacks. Leveraging these insights, we propose a representation-based risk score that can guide abstention on sensitive inputs at inference time, providing a practical mechanism to mitigate privacy leakage. Our work introduces a principled, representation-level audit for unlearning, offering theoretical insight and actionable tools for safer deployment of language models.
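The abstract's central tool is Partial Information Decomposition, which splits the mutual information two sources carry about a target into redundant, unique, and synergistic parts. The paper's exact estimator for LLM representations is not given here, but the redundancy term can be illustrated with the classic Williams–Beer I_min measure on small discrete distributions (the joint tables below are toy examples, not the paper's data):

```python
import numpy as np

def specific_info(p_xy, y):
    """Specific information I_spec(X; Y=y) = sum_x p(x|y) * log2(p(y|x) / p(y))."""
    p_y = p_xy.sum(axis=0)[y]          # marginal p(Y=y)
    p_x = p_xy.sum(axis=1)             # marginal p(X=x)
    col = p_xy[:, y]                   # joint p(X=x, Y=y)
    p_x_given_y = col / p_y
    with np.errstate(divide="ignore", invalid="ignore"):
        p_y_given_x = np.where(p_x > 0, col / p_x, 0.0)
    mask = p_x_given_y > 0
    return float(np.sum(p_x_given_y[mask] * np.log2(p_y_given_x[mask] / p_y)))

def redundancy(p_x1y, p_x2y):
    """Williams–Beer redundancy: R = sum_y p(y) * min_i I_spec(X_i; Y=y).
    Here X1, X2 would play the role of pre- and post-unlearning representations
    and Y the forgotten data."""
    p_y = p_x1y.sum(axis=0)
    return sum(p_y[y] * min(specific_info(p_x1y, y), specific_info(p_x2y, y))
               for y in range(len(p_y)))

# Toy case: both sources are perfect copies of a fair bit Y,
# so all 1 bit of information about Y is redundant.
joint = np.array([[0.5, 0.0],
                  [0.0, 0.5]])
print(f"redundant information: {redundancy(joint, joint):.3f} bits")
```

In the paper's framing, a large redundancy term between the pre- and post-unlearning models is exactly the "residual knowledge" that the audit flags.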


Key Contributions

  • Partial Information Decomposition (PID) framework that decomposes LLM representations into unlearned vs. residual knowledge components after unlearning, formalizing 'shallow unlearning'
  • Empirical finding that redundant information shared between pre- and post-unlearning models constitutes residual knowledge and correlates with susceptibility to adversarial reconstruction attacks
  • Representation-based risk score enabling inference-time abstention on sensitive inputs to mitigate privacy leakage
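The claim that forgotten data remains "linearly decodable" is operationalized with linear probes. The paper's models and probe setup are not reproduced here; the sketch below uses synthetic stand-ins for post-unlearning hidden states in which a forget-set direction is attenuated but not erased, and shows that a logistic-regression probe still recovers forget-set membership well above chance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d, n = 64, 2000

# Hypothetical stand-in for hidden states: unlearning shrinks the
# forget-set signal direction (factor 0.4) but leaves residual signal.
signal = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)                 # 1 = forget-set example
h_post = rng.normal(size=(n, d)) + 0.4 * labels[:, None] * signal

X_tr, X_te, y_tr, y_te = train_test_split(h_post, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = probe.score(X_te, y_te)
print(f"linear probe accuracy on 'forgotten' membership: {acc:.2f}")
```

A probe accuracy far above 50% on representations of supposedly forgotten data is the signature of shallow unlearning that the audit formalizes.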

🛡️ Threat Analysis

Model Inversion Attack

The core threat model is an adversary reconstructing training data from model representations. The paper demonstrates that 'forgotten' data remains linearly decodable from internal representations and that this residual decodability correlates with susceptibility to adversarial reconstruction attacks; a representation-based risk score is proposed as a defense.
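The proposed defense is a representation-based risk score that gates abstention at inference time. The paper's scoring function is not specified here; one minimal sketch, assuming the score compares a hidden state's cosine proximity to forget-set versus retain-set centroids, looks like this (centroids, threshold `tau`, and vectors are all illustrative):

```python
import numpy as np

def risk_score(h, forget_centroid, retain_centroid):
    """Hypothetical risk score: how much closer a hidden state sits to the
    forget-set centroid than to the retain-set centroid, in cosine terms."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos(h, forget_centroid) - cos(h, retain_centroid)

def gate(h, forget_centroid, retain_centroid, tau=0.1):
    """Abstain whenever the representation looks too forget-set-like."""
    return "ABSTAIN" if risk_score(h, forget_centroid, retain_centroid) > tau else "ANSWER"

forget_c = np.array([1.0, 0.0])
retain_c = np.array([0.0, 1.0])
print(gate(np.array([0.9, 0.1]), forget_c, retain_c))  # near forget centroid
print(gate(np.array([0.1, 0.9]), forget_c, retain_c))  # near retain centroid
```

The design choice here is that abstention is triggered by representation geometry alone, so the gate needs no access to the original forget set at inference time, only to summary statistics computed during the audit.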


Details

Domains
nlp
Model Types
llm · transformer
Threat Tags
white_box · training_time · inference_time
Applications
language model unlearning · privacy-preserving language model deployment