
Erased, But Not Forgotten: Erased Rectified Flow Transformers Still Remain Unsafe Under Concept Attack

Nanxiang Jiang 1, Yifan Sun 1, Enhan Kang 1, Daiheng Gao 2, Yun Zhou 3, Yanxia Chang 1, Zheng Zhu 4, Yeying Jin 5, Wenjun Wu 1

1 citation · 59 references · arXiv


Published on arXiv · 2510.00635

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

ReFlux successfully reactivates erased concepts in state-of-the-art Flux-based concept-erased models using only 3.57 MB of optimized parameters, demonstrating that current concept erasure techniques transfer poorly to rectified flow transformers.

ReFlux

Novel technique introduced


Recent advances in text-to-image (T2I) diffusion models have enabled impressive generative capabilities, but they also raise significant safety concerns due to the potential to produce harmful or undesirable content. While concept erasure has been explored as a mitigation strategy, most existing approaches and corresponding attack evaluations are tailored to Stable Diffusion (SD) and exhibit limited effectiveness when transferred to next-generation rectified flow transformers such as Flux. In this work, we present ReFlux, the first concept attack method specifically designed to assess the robustness of concept erasure in the latest rectified flow-based T2I framework. Our approach is motivated by the observation that existing concept erasure techniques, when applied to Flux, fundamentally rely on a phenomenon known as attention localization. Building on this insight, we propose a simple yet effective attack strategy that specifically targets this property. At its core, a reverse-attention optimization strategy is introduced to effectively reactivate suppressed signals while stabilizing attention. This is further reinforced by a velocity-guided dynamic that enhances the robustness of concept reactivation by steering the flow matching process, and a consistency-preserving objective that maintains the global layout and preserves unrelated content. Extensive experiments consistently demonstrate the effectiveness and efficiency of the proposed attack method, establishing a reliable benchmark for evaluating the robustness of concept erasure strategies in rectified flow transformers.


Key Contributions

  • First concept attack method (ReFlux) specifically designed for rectified flow-based T2I models (Flux), exposing the failure of existing concept erasure techniques on next-generation architectures
  • Reverse-attention optimization strategy that targets attention localization — the mechanism underlying Flux-compatible concept erasure — to reactivate suppressed concepts
  • Velocity-guided dynamic and consistency-preserving objective that stabilize concept reactivation while preserving unrelated image content; entire attack requires only 3.57 MB of parameters

🛡️ Threat Analysis


ReFlux is an adversarial evasion attack against a deployed safety mechanism (concept erasure) in a generative model. The attack optimizes a small adapter using reverse-attention optimization and velocity-guided dynamics to force the model to produce outputs its safety fine-tuning was designed to suppress — analogous to adversarial examples bypassing safety classifiers, applied here at the generative model level.
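The attack's combined objective can be pictured as three weighted terms: a reverse-attention term that pushes attention mass back onto the erased-concept tokens, a velocity-guidance term that steers the flow-matching trajectory, and a consistency term that keeps unrelated content intact. The sketch below is a simplified, hypothetical scalar formulation, not the paper's actual code; all function names and weights are illustrative assumptions.

```python
import math

def reverse_attention_loss(attn_to_concept, eps=1e-8):
    # Illustrative: minimizing -log(attention mass on erased-concept
    # tokens) reactivates the suppressed attention signal.
    return -math.log(attn_to_concept + eps)

def velocity_guidance_loss(v_pred, v_ref):
    # Illustrative: mean squared error steering the predicted
    # flow-matching velocity toward a concept-carrying reference.
    return sum((p - r) ** 2 for p, r in zip(v_pred, v_ref)) / len(v_pred)

def consistency_loss(latent, latent_orig):
    # Illustrative: keep the global layout and unrelated content
    # close to the original generation.
    return sum((a - b) ** 2 for a, b in zip(latent, latent_orig)) / len(latent)

def reflux_objective(attn, v_pred, v_ref, latent, latent_orig,
                     w_attn=1.0, w_vel=0.5, w_cons=0.1):
    # Hypothetical weighted sum of the three terms; the optimized
    # parameters (~3.57 MB adapter) would be updated to minimize this.
    return (w_attn * reverse_attention_loss(attn)
            + w_vel * velocity_guidance_loss(v_pred, v_ref)
            + w_cons * consistency_loss(latent, latent_orig))
```

Note the intuition this encodes: as the adapter drives more attention onto the erased concept, the first term falls, while the other two terms penalize drifting off the reference flow or disturbing unrelated image regions.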


Details

Domains
vision, generative
Model Types
diffusion, transformer
Threat Tags
white_box, inference_time, targeted, digital
Datasets
Flux.1 [dev]
Applications
text-to-image generation, content safety systems