
Understanding Empirical Unlearning with Combinatorial Interpretability

Shingo Kodama 1, Niv Cohen 2, Micah Adler 3, Nir Shavit 3,4

0 citations · 26 references · arXiv (Cornell University)


Published on arXiv · arXiv:2602.19215

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

Gradient Ascent unlearning recovers to 80% TPR on erased clauses in a median of 7 fine-tuning steps, confirming that erased knowledge persists structurally in model weights despite suppressed expression.

Combinatorial Interpretability for Unlearning Analysis

Novel technique introduced


While many recent methods aim to unlearn or remove knowledge from pretrained models, seemingly erased knowledge often persists and can be recovered in various ways. Because large foundation models are far from interpretable, understanding whether and how such knowledge persists remains a significant challenge. To address this, we turn to the recently developed framework of combinatorial interpretability. This framework, designed for two-layer neural networks, enables direct inspection of the knowledge encoded in the model weights. We reproduce baseline unlearning methods within the combinatorial interpretability setting and examine their behavior along two dimensions: (i) whether they truly remove knowledge of a target concept (the concept we wish to remove) or merely inhibit its expression while retaining the underlying information, and (ii) how easily the supposedly erased knowledge can be recovered through various fine-tuning operations. Our results shed light, within a fully interpretable setting, on how knowledge can persist despite unlearning and when it might resurface.
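The setup the abstract describes can be sketched at toy scale: train a two-layer ReLU network on a synthetic DNF concept, then read clause structure directly off the weights. Everything below (the 6-variable formula, the hidden width, the learning rate, the inspection heuristic) is our illustrative choice, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setting: a two-layer ReLU network trained on
# a DNF concept over 6 boolean inputs,
#   y = (x0 AND x1) OR (x2 AND x3),
# whose weights we then inspect directly.
X = np.array([[(i >> b) & 1 for b in range(6)] for i in range(64)], dtype=float)
y = ((X[:, 0] * X[:, 1] + X[:, 2] * X[:, 3]) > 0).astype(float)

H = 16  # hidden width (our choice)
W1 = rng.normal(0.0, 0.5, (6, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, H);      b2 = 0.0

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)           # ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    return h, p

# Full-batch gradient descent on binary cross-entropy.
for _ in range(3000):
    h, p = forward(X)
    d = (p - y) / len(y)                       # dL/dlogit for BCE + sigmoid
    dh = np.outer(d, W2) * (h > 0)
    W2 -= 0.5 * (h.T @ d);  b2 -= 0.5 * d.sum()
    W1 -= 0.5 * (X.T @ dh); b1 -= 0.5 * dh.sum(axis=0)

_, p = forward(X)
acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"train accuracy: {acc:.2f}")

# Crude weight inspection: for the hidden units that push the output up the
# most, list which input variables they weight most heavily -- if the clauses
# are encoded in the weights, the pairs {x0, x1} and {x2, x3} should dominate.
for j in np.argsort(W2)[-3:][::-1]:
    top = np.argsort(W1[:, j])[-2:][::-1]
    print(f"unit {j}: W2={W2[j]:+.2f}, top inputs x{top[0]}, x{top[1]}")
```

The inspection step here is only a heuristic proxy for combinatorial interpretability; the paper's framework analyzes two-layer weights far more systematically, but the printout gives a feel for why a fully fitted DNF network is directly readable.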


Key Contributions

  • Uses combinatorial interpretability on two-layer networks trained on DNF clauses to directly inspect whether unlearning methods truly erase concept knowledge from weights or merely suppress its expression
  • Measures 'recovery time' (fine-tuning steps to 80% TPR on erased clauses) for Gradient Ascent, Task Vector, and Privacy-Preserving Distillation, revealing that Gradient Ascent-unlearned models recover erased clauses in a median of 7 steps
  • Provides mechanistic evidence that most unlearning methods inhibit concept expression rather than removing encoded knowledge, leaving the model vulnerable to rapid capability restoration
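The recovery-time measurement from the contributions above can be approximated end to end in a toy setting: gradient-ascent unlearning of one DNF clause, then fine-tuning on the original data while counting steps until the erased clause's TPR crosses the 80% threshold. Apart from that threshold, every number below (formula, widths, learning rates, suppression criterion) is our assumption, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Concept: y = (x0 AND x1) OR (x2 AND x3). We erase clause A and retain B.
X = np.array([[(i >> b) & 1 for b in range(6)] for i in range(64)], dtype=float)
clause_a = (X[:, 0] * X[:, 1]) > 0      # clause to erase
clause_b = (X[:, 2] * X[:, 3]) > 0      # clause to retain
y = (clause_a | clause_b).astype(float)

H = 32
W1 = rng.normal(0.0, 0.5, (6, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, H);      b2 = 0.0

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)
    return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def step(Xb, yb, lr):
    """One full-batch step on BCE; a negative lr performs gradient ascent."""
    global W1, b1, W2, b2
    h, p = forward(Xb)
    d = (p - yb) / len(yb)
    dh = np.outer(d, W2) * (h > 0)
    W2 -= lr * (h.T @ d);  b2 -= lr * d.sum()
    W1 -= lr * (Xb.T @ dh); b1 -= lr * dh.sum(axis=0)

def tpr(mask):
    _, p = forward(X)
    return ((p > 0.5) & mask).sum() / mask.sum()

for _ in range(1000):                   # pretrain on the full concept
    step(X, y, lr=0.5)

# Gradient-ascent unlearning on positives unique to clause A, until the
# clause's expression is suppressed (our stopping criterion: TPR < 0.2).
forget = clause_a & ~clause_b
ga_steps = 0
while tpr(forget) >= 0.2 and ga_steps < 20000:
    step(X[forget], y[forget], lr=-2.0)
    ga_steps += 1

# Recovery time: fine-tune on the ORIGINAL data, counting steps until the
# erased clause's TPR is back above the paper's 80% threshold.
recovery = 0
while tpr(clause_a) < 0.8 and recovery < 20000:
    step(X, y, lr=0.5)
    recovery += 1

print(f"GA steps to suppress: {ga_steps}, fine-tune steps to recover: {recovery}")
```

In this toy run recovery is typically fast relative to pretraining, which is the qualitative point: gradient ascent pushes the clause's output below threshold without dismantling the weight structure that encodes it, so ordinary fine-tuning restores it quickly.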

🛡️ Threat Analysis

Model Inversion Attack

The paper attacks unlearning methods by demonstrating that training knowledge/concepts are not truly removed from model weights — the information persists and can be recovered. Per the machine unlearning decision tree, a paper that attacks an unlearning method by showing erased data/knowledge can still be extracted maps to ML03 (training knowledge that should have been erased persists in the model). The interpretability framework allows direct inspection of weight-encoded knowledge, confirming the unlearning failure mechanistically.


Details

Domains
nlp, vision
Model Types
traditional_ml
Threat Tags
white_box, training_time
Datasets
Synthetic DNF Boolean formula datasets, DNF-shared datasets
Applications
machine unlearning, model editing, concept erasure