defense 2025

EraseFlow: Learning Concept Erasure Policies via GFlowNet-Driven Alignment

Abhiram Kusumba 1,2, Maitreya Patel 2, Kyle Min 3, Changhoon Kim 4, Chitta Baral 2, Yezhou Yang 2

1 citation · 1 influential citation · 64 references · arXiv


Published on arXiv: 2511.00804


Key Finding

EraseFlow simultaneously achieves a lower adversarial attack success rate and a better FID than baselines, at a reduced computational cost per erased concept

EraseFlow

Novel technique introduced


Erasing harmful or proprietary concepts from powerful text-to-image generators is an emerging safety requirement, yet current "concept erasure" techniques either collapse image quality, rely on brittle adversarial losses, or demand prohibitive retraining cycles. We trace these limitations to a myopic view of the denoising trajectories that govern diffusion-based generation. We introduce EraseFlow, the first framework that casts concept unlearning as exploration in the space of denoising paths and optimizes it with GFlowNets equipped with the trajectory balance objective. By sampling entire trajectories rather than single end states, EraseFlow learns a stochastic policy that steers generation away from target concepts while preserving the model's prior. EraseFlow eliminates the need for carefully crafted reward models; as a result, it generalizes effectively to unseen concepts and avoids reward hacking while improving performance. Extensive empirical results demonstrate that EraseFlow outperforms existing baselines and achieves an optimal trade-off between performance and prior preservation.


Key Contributions

  • EraseFlow: first framework casting concept unlearning as trajectory-space exploration optimized via GFlowNets with trajectory balance objective
  • Reward-free alignment strategy with theoretical proof that constant reward + trajectory balance reliably erases semantic content, enabling generalization to unseen concepts
  • Demonstrated robustness against adversarial concept-reintroduction attacks while preserving image quality (FID), covering NSFW content, artistic styles, and fine-grained logos
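To make the second contribution concrete, the trajectory balance objective matches the forward (policy) flow along a sampled trajectory against the backward flow weighted by the terminal reward; with a constant reward, the learned reward model drops out entirely. Below is a minimal, hedged sketch of that squared residual for a single trajectory — the function name, arguments, and the toy numbers are illustrative assumptions, not the paper's actual implementation.

```python
def trajectory_balance_loss(log_Z, log_pf_steps, log_pb_steps, log_reward):
    """Squared trajectory-balance residual for one sampled trajectory.

    log_pf_steps / log_pb_steps: per-step log-probabilities of the forward
    (policy) and backward processes along the trajectory; log_Z is the log
    partition-function estimate; log_reward is the terminal log R(x).
    All names here are illustrative, not taken from the paper's code.
    """
    forward = log_Z + sum(log_pf_steps)       # log [Z * prod_t P_F(s_{t+1} | s_t)]
    backward = log_reward + sum(log_pb_steps)  # log [R(x) * prod_t P_B(s_t | s_{t+1})]
    return (forward - backward) ** 2

# Constant reward (R ≡ 1, so log R = 0), as in the reward-free setting:
loss = trajectory_balance_loss(
    log_Z=0.5,
    log_pf_steps=[-1.0, -0.8, -1.2],  # toy 3-step trajectory
    log_pb_steps=[-0.9, -1.1, -1.0],
    log_reward=0.0,  # constant reward removes the need for a learned reward model
)
```

With `log_reward` pinned at 0, minimizing this residual only balances the forward policy against the backward process, which is the mechanism the paper's theory ties to reliable erasure without a hackable reward signal.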

🛡️ Threat Analysis

Input Manipulation Attack

The paper explicitly evaluates robustness against adversarial attacks that craft inputs to reintroduce erased concepts ('adversarial attack success rate' is a key metric in Figure 2). Defending against inference-time input manipulation that bypasses safety mechanisms maps to ML01.

Output Integrity Attack

Primary contribution is ensuring output integrity of text-to-image generators — preventing diffusion models from producing harmful (NSFW), copyrighted, or proprietary content. This is fundamentally about controlling/authenticating model output safety, fitting ML09's scope of output integrity and content provenance.


Details

Domains
vision · generative
Model Types
diffusion
Threat Tags
training_time · inference_time · digital
Datasets
Stable Diffusion v1.4 evaluation benchmarks · NSFW imagery benchmarks
Applications
text-to-image generation · NSFW content filtering · copyright protection · concept unlearning