
Unlearning Comparator: A Visual Analytics System for Comparative Evaluation of Machine Unlearning Methods

Jaeung Lee, Suhyeon Yu, Yurim Jang, Simon S. Woo, Jaemin Jo


Published on arXiv: 2508.12730

Membership Inference Attack

OWASP ML Top 10: ML04

Key Finding

The Unlearning Comparator system enables researchers to identify behavioral differences between unlearning methods at multiple granularities and to evaluate privacy-utility trade-offs through integrated MIA simulation, yielding insights that inform method improvement.

Unlearning Comparator

Novel technique introduced


Machine Unlearning (MU) aims to remove target training data from a trained model so that the removed data no longer influences the model's behavior, fulfilling "right to be forgotten" obligations under data privacy laws. Yet, we observe that researchers in this rapidly emerging field face challenges in analyzing and understanding the behavior of different MU methods, especially in terms of three fundamental principles in MU: accuracy, efficiency, and privacy. Consequently, they often rely on aggregate metrics and ad-hoc evaluations, making it difficult to accurately assess the trade-offs between methods. To fill this gap, we introduce a visual analytics system, Unlearning Comparator, designed to facilitate the systematic evaluation of MU methods. Our system supports two important tasks in the evaluation process: model comparison and attack simulation. First, it allows the user to compare the behaviors of two models, such as a model generated by a certain method and a retrained baseline, at class-, instance-, and layer-levels to better understand the changes made after unlearning. Second, our system simulates membership inference attacks (MIAs) to evaluate the privacy of a method, where an attacker attempts to determine whether specific data samples were part of the original training set. We evaluate our system through a case study visually analyzing prominent MU methods and demonstrate that it helps the user not only understand model behaviors but also gain insights that can inform the improvement of MU methods. The source code is publicly available at https://github.com/gnueaj/Machine-Unlearning-Comparator.


Key Contributions

  • Unlearning Comparator: an interactive visual analytics system for systematic comparative evaluation of machine unlearning methods across accuracy, efficiency, and privacy dimensions
  • Multi-level model comparison (class-, instance-, and layer-level) between unlearned models and retrained baselines to reveal post-unlearning behavioral changes
  • Integrated MIA simulation module that evaluates privacy preservation by testing whether an attacker can still determine membership of unlearned data

🛡️ Threat Analysis

Membership Inference Attack

The system's second core task is explicitly simulating membership inference attacks to evaluate whether unlearned data samples can still be identified as training members; MIA is a primary feature of the system, not a secondary mention.
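To make the evaluated threat concrete, the sketch below shows a common confidence-thresholding MIA baseline: the attacker predicts "member" when the model's top-class confidence on a sample exceeds a threshold. This is a minimal illustration under assumed inputs, not the paper's exact attack implementation; the function name and toy confidence values are hypothetical. After successful unlearning, confidences on the forget set should resemble those on unseen data, pushing attack accuracy toward the 50% chance level.

```python
# Minimal sketch of a confidence-thresholding membership inference attack,
# a standard MIA baseline (the paper's exact attack setup is an assumption here).
import numpy as np

def mia_confidence_attack(member_conf, nonmember_conf, threshold=0.5):
    """Predict 'member' when top-class confidence exceeds the threshold;
    return the attack's accuracy over both groups combined."""
    member_conf = np.asarray(member_conf)
    nonmember_conf = np.asarray(nonmember_conf)
    correct = (member_conf > threshold).sum() + (nonmember_conf <= threshold).sum()
    return correct / (len(member_conf) + len(nonmember_conf))

# Hypothetical toy confidences: after unlearning, the forget-set distribution
# should look like the unseen-data distribution (accuracy near 0.5 = private).
forget = [0.55, 0.48, 0.62, 0.51]   # model confidences on unlearned samples
unseen = [0.52, 0.44, 0.60, 0.49]   # model confidences on held-out samples
print(mia_confidence_attack(forget, unseen, threshold=0.5))  # → 0.625
```

An attack accuracy well above 0.5 on the forget set indicates residual membership signal, i.e., weaker privacy for that unlearning method.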


Details

Domains
vision
Model Types
cnn
Threat Tags
inference_time, black_box
Applications
machine unlearning evaluation, model comparison, privacy auditing