
REFORGE: Multi-modal Attacks Reveal Vulnerable Concept Unlearning in Image Generation Models

Yong Zou 1, Haoran Li 2, Fanxiao Li 1, Shenyang Wei 1, Yunyun Dong 1, Li Tang 1, Wei Zhou 1, Renyang Liu 3


Published on arXiv

2603.16576

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

REFORGE significantly improves attack success rate over baselines while achieving stronger semantic alignment and higher efficiency in recovering erased concepts from unlearned diffusion models.

REFORGE

Novel technique introduced


Recent progress in image generation models (IGMs) enables high-fidelity content creation but also amplifies risks, including the reproduction of copyrighted content and the generation of offensive content. Image Generation Model Unlearning (IGMU) mitigates these risks by removing harmful concepts without full retraining. Despite growing attention, the robustness of IGMU under adversarial inputs, particularly image-side threats in black-box settings, remains underexplored. To bridge this gap, we present REFORGE, a black-box red-teaming framework that evaluates IGMU robustness via adversarial image prompts. REFORGE initializes stroke-based images and optimizes perturbations with a cross-attention-guided masking strategy that allocates noise to concept-relevant regions, balancing attack efficacy and visual fidelity. Extensive experiments across representative unlearning tasks and defenses demonstrate that REFORGE significantly improves attack success rate while achieving stronger semantic alignment and higher efficiency than existing baselines. These results expose persistent vulnerabilities in current IGMU methods and highlight the need for robustness-aware unlearning against multi-modal adversarial attacks. Our code is at: https://github.com/Imfatnoily/REFORGE.


Key Contributions

  • First black-box red-teaming framework targeting image-side inputs to bypass concept unlearning in text-to-image models
  • Cross-attention-guided masking strategy that allocates adversarial perturbations to concept-relevant regions for effective attacks with visual fidelity
  • Extensive evaluation across multiple unlearning methods demonstrating persistent vulnerabilities in current IGMU techniques
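The cross-attention-guided masking strategy from the second contribution can be illustrated with a minimal sketch. All helper names below are hypothetical, and the attention map is taken as a given 2-D array rather than extracted from a diffusion model's cross-attention layers as the paper does: the idea is simply to threshold the map to the most concept-relevant pixels and confine bounded noise to that region, preserving visual fidelity elsewhere.

```python
import numpy as np

def attention_guided_mask(attn_map, quantile=0.95):
    """Binary mask keeping only the most concept-relevant pixels.

    attn_map: 2-D array of per-pixel attention scores for the erased concept.
    quantile: pixels scoring at or above this quantile are kept.
    """
    thresh = np.quantile(attn_map, quantile)
    return (attn_map >= thresh).astype(np.float32)

def apply_masked_perturbation(image, noise, attn_map, eps=8 / 255, quantile=0.95):
    """Add L_inf-bounded adversarial noise only inside the attention mask.

    image: (H, W, C) array in [0, 1]; noise: (H, W, C) raw perturbation.
    Pixels outside the mask are left untouched, trading attack strength
    for visual fidelity as described in the abstract.
    """
    mask = attention_guided_mask(attn_map, quantile)
    delta = np.clip(noise, -eps, eps) * mask[..., None]  # broadcast over channels
    return np.clip(image + delta, 0.0, 1.0)
```

This is a sketch of the allocation idea only; REFORGE's actual objective, attention extraction, and optimizer are described in the paper.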

🛡️ Threat Analysis

Input Manipulation Attack

Crafts adversarial image prompts (stroke-based images with optimized perturbations) that manipulate model outputs at inference time, bypassing unlearning mechanisms and causing the model to regenerate erased concepts.
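Because the threat model is black-box and inference-time, the attacker can only query the model and score its outputs. A query-based random search over masked perturbations gives the flavor of such an attack; this is an illustrative stand-in with hypothetical `query_model` and `score_fn` callables, not REFORGE's actual optimization procedure.

```python
import numpy as np

def black_box_attack(query_model, score_fn, init_image, attn_mask,
                     eps=8 / 255, steps=200, seed=0):
    """Query-based random search for an adversarial image prompt.

    query_model: black-box callable, image -> model output (no gradients).
    score_fn: callable scoring how strongly the output exhibits the
              erased concept (higher = more successful attack).
    attn_mask: (H, W, 1) mask restricting noise to concept-relevant regions.
    """
    rng = np.random.default_rng(seed)
    best_img = init_image
    best_score = score_fn(query_model(init_image))
    for _ in range(steps):
        # Propose bounded noise confined to the attention mask.
        noise = rng.uniform(-eps, eps, size=init_image.shape) * attn_mask
        candidate = np.clip(best_img + noise, 0.0, 1.0)
        score = score_fn(query_model(candidate))
        if score > best_score:  # keep only improving perturbations
            best_img, best_score = candidate, score
    return best_img, best_score
```

The greedy accept/reject loop needs only forward queries, matching the black-box, inference-time tags below.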


Details

Domains
vision, multimodal, generative
Model Types
diffusion, multimodal
Threat Tags
black_box, inference_time, targeted, digital
Applications
text-to-image generation, concept unlearning evaluation, diffusion model safety