
Protecting the Undeleted in Machine Unlearning

Aloni Cohen 1, Refael Kohen 2, Kobbi Nissim 3, Uri Stemmer 2

0 citations · 26 references · arXiv (Cornell University)


Published on arXiv · 2602.16697

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

An adversary controlling only ω(1) data points can reconstruct nearly the entire training dataset by issuing deletion requests to any perfect-retraining-based unlearning mechanism, revealing a fundamental privacy risk for undeleted users.


Machine unlearning aims to remove specific data points from a trained model, often striving to emulate "perfect retraining", i.e., producing the model that would have been obtained had the deleted data never been included. We demonstrate that this approach, and the security definitions that enable it, carry significant privacy risks for the remaining (undeleted) data points. We present a reconstruction attack showing that for certain tasks that can be computed securely without deletions, a mechanism adhering to perfect retraining allows an adversary controlling only $ω(1)$ data points to reconstruct almost the entire dataset simply by issuing deletion requests. We survey existing definitions for machine unlearning, showing that they are either susceptible to such attacks or too restrictive to support basic functionalities like exact summation. To address this problem, we propose a new security definition that specifically safeguards undeleted data against leakage caused by the deletion of other points. We show that our definition permits several essential functionalities, such as bulletin boards, summations, and statistical learning.


Key Contributions

  • Reconstruction attack showing that any perfect-retraining-based unlearning mechanism allows an adversary with ω(1) controlled points to recover nearly all undeleted training data via deletion requests alone
  • Survey of existing machine unlearning security definitions demonstrating they are either vulnerable to the attack or too restrictive to support basic functionalities (e.g., exact summation)
  • New security definition that provably protects undeleted data from leakage caused by others' deletions, while permitting bulletin boards, summations, and statistical learning

🛡️ Threat Analysis

Model Inversion Attack

The paper's central technical contribution is a reconstruction attack: an adversary controlling ω(1) data points can reconstruct almost the entire training dataset by issuing deletion requests and observing how the mechanism's output changes before and after each deletion. This is a training data reconstruction attack with an explicit adversary. The proposed security definition defends against this leakage, which also falls under the OWASP ML03 (model inversion) threat category.
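The mechanics of such leakage can be illustrated with a deliberately simplified toy example (this is not the paper's construction, and the `MaxMechanism` class and sentinel trick below are invented for illustration): a mechanism that publishes the maximum of the current dataset under perfect retraining. While an adversary-chosen sentinel value dominates, the output reveals nothing about honest points; but a single deletion request for the adversary's own point exposes an undeleted honest value.

```python
# Toy sketch (NOT the paper's construction): leakage of an undeleted point
# from a perfect-retraining mechanism that publishes the dataset maximum.

class MaxMechanism:
    """Publishes the max of the dataset; deletion means perfect retraining,
    i.e., the output is recomputed as if the deleted point never existed."""

    def __init__(self, points):
        self.points = list(points)

    def delete(self, value):
        # Perfect retraining: recompute over the dataset without this point.
        self.points.remove(value)

    def output(self):
        return max(self.points)

honest = [3, 17, 8, 11]      # undeleted users' private values
SENTINEL = 10**6             # adversary-chosen dominating value
mech = MaxMechanism(honest + [SENTINEL])

# Before any deletion, the published output is just the adversary's
# own sentinel and reveals nothing about the honest points.
assert mech.output() == SENTINEL

# The adversary deletes only its OWN point, yet under perfect retraining
# the new output exposes an undeleted honest value exactly.
mech.delete(SENTINEL)
assert mech.output() == 17   # leaked: max of the honest points
```

The toy leaks only one undeleted value; the paper's attack amplifies this idea, using ω(1) controlled points and adaptive deletion requests to recover nearly the entire dataset.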


Details

Model Types: traditional_ml
Threat Tags: white_box, training_time
Applications: statistical learning, machine unlearning systems