Towards Irreversible Machine Unlearning for Diffusion Models
Xun Yuan 1, Zilong Zhao 2, Jiayu Li 2, Aryan Pasikhani 3, Prosanta Gope 3, Biplab Sikdar 1
Published on arXiv
2512.03564
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
DiMRA successfully reverses state-of-the-art finetuning-based unlearning methods without prior knowledge of the unlearned concepts, while DiMUM resists relearning and preserves generative quality better than prior approaches.
DiMRA / DiMUM
Novel technique introduced
Diffusion models are renowned for their state-of-the-art performance in generating synthetic images. However, concerns related to safety, privacy, and copyright highlight the need for machine unlearning, which can make diffusion models forget specific training data and prevent the generation of sensitive or unwanted content. Current machine unlearning methods for diffusion models are primarily designed for conditional diffusion models and focus on unlearning specific data classes or features. Among these, finetuning-based machine unlearning methods are recognized for their efficiency and effectiveness: they update the parameters of a pre-trained diffusion model by minimizing carefully designed loss functions. However, in this paper we propose a novel attack named the Diffusion Model Relearning Attack (DiMRA), which reverses finetuning-based machine unlearning, exposing a significant vulnerability in this class of techniques. Without prior knowledge of the unlearned elements, DiMRA optimizes the unlearned diffusion model on an auxiliary dataset to reverse the unlearning, enabling the model to regenerate previously unlearned elements. To mitigate this vulnerability, we propose a novel machine unlearning method for diffusion models, termed Diffusion Model Unlearning by Memorization (DiMUM). Unlike traditional methods that focus on forgetting, DiMUM memorizes alternative data or features in place of the targeted data or features, preventing the model from generating them. In our experiments, we demonstrate the effectiveness of DiMRA in reversing state-of-the-art finetuning-based machine unlearning methods for diffusion models, highlighting the need for more robust solutions. We extensively evaluate DiMUM, demonstrating its superior ability to preserve the generative performance of diffusion models while enhancing robustness against DiMRA.
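The core vulnerability can be illustrated with a deliberately tiny model. The sketch below is an assumption-laden toy, not the paper's implementation: the "diffusion model" is a single parameter fitting y = theta * x, "unlearning" is a few steps of gradient ascent on the forget sample (a non-convergent loss, so it is stopped early and the parameter stays near the pre-trained optimum), and a DiMRA-style attacker then runs ordinary gradient descent on auxiliary data from the same distribution, never seeing the forget sample.

```python
# Toy 1-D sketch of the relearning vulnerability (illustrative assumptions
# only; the paper works with real diffusion models and image data).

def grad_mse(theta, x, y):
    # d/dtheta of (theta * x - y)^2
    return 2 * (theta * x - y) * x

theta_star = 2.0           # pre-trained optimum: fits y = 2x exactly
theta = theta_star + 0.01  # tiny nudge so gradient ascent has a direction

# Finetuning-based "unlearning": gradient ASCENT on the forget sample.
# The forget loss grows without bound, so it is run for only a few steps,
# and theta remains close to theta_star.
forget_x, forget_y = 1.0, 2.0
lr = 0.2
for _ in range(10):
    theta += lr * grad_mse(theta, forget_x, forget_y)  # ascent
theta_unlearned = theta

# DiMRA-style relearning: plain gradient DESCENT on auxiliary data from
# the same distribution -- no knowledge of the forget sample required.
aux = [(0.5, 1.0), (1.5, 3.0), (2.0, 4.0)]  # all satisfy y = 2x
for _ in range(200):
    for x, y in aux:
        theta -= lr * grad_mse(theta, x, y)
theta_relearned = theta  # ends up back near theta_star
```

Because unlearning left the parameter in the basin of the pre-trained optimum, the auxiliary-data loss simply pulls it back, recovering the forgotten behavior.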
Key Contributions
- DiMRA: a relearning attack that reverses finetuning-based machine unlearning in diffusion models by fine-tuning on auxiliary data, without knowledge of the unlearned elements
- Analysis showing existing unlearning fails because non-convergent unlearning losses keep model parameters close to the pre-trained model, making reversal feasible
- DiMUM: a memorization-based unlearning defense that replaces target concepts with alternative data/features rather than simply forgetting, providing robustness against DiMRA
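The memorization idea in the last contribution can also be sketched in miniature. This toy (illustrative assumptions only, not the paper's method) uses a two-parameter linear "model" w·x: instead of ascending a forget loss, the DiMUM-style objective descends toward an alternative output for the forget concept while still fitting retained data. The objective converges to a true loss minimum, so subsequent finetuning on auxiliary retained-style data exerts no pull back toward the forgotten output.

```python
# Toy sketch of unlearning by MEMORIZATION (illustrative assumptions only).

def dot(w, x):
    return w[0] * x[0] + w[1] * x[1]

def sgd_step(w, x, y, lr=0.1):
    # one gradient-descent step on (w.x - y)^2
    err = dot(w, x) - y
    return [w[0] - lr * 2 * err * x[0], w[1] - lr * 2 * err * x[1]]

w = [1.0, 1.0]              # "pre-trained": maps both concepts to 1.0
retain = ([1.0, 0.0], 1.0)  # concept to keep
forget_x = [0.0, 1.0]       # concept to unlearn (originally -> 1.0)
alt_y = 0.0                 # alternative output memorized in its place

# DiMUM-style objective: fit retained data AND map forget_x to alt_y.
# Both losses are convergent, so w settles at a genuine minimum (~[1, 0]).
for _ in range(300):
    w = sgd_step(w, *retain)
    w = sgd_step(w, forget_x, alt_y)

# Attempted relearning on auxiliary retained-style data: the gradient is
# already near zero there, so the forgotten output is not restored.
aux = ([2.0, 0.0], 2.0)
for _ in range(300):
    w = sgd_step(w, *aux)
```

The contrast with gradient-ascent unlearning is the convergence of the objective: the model parameters are not held artificially near the pre-trained optimum, so auxiliary-data finetuning has nothing to "snap back" to.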
🛡️ Threat Analysis
Machine unlearning here is a content safety mechanism ensuring diffusion models don't generate harmful, copyrighted, or private content — i.e., it enforces output integrity. DiMRA attacks this mechanism, defeating the content suppression and allowing the model to regenerate previously forbidden outputs. DiMUM defends by making unlearning irreversible. This directly targets output content safety and integrity.