defense 2026

AEGIS: Adversarial Target-Guided Retention-Data-Free Robust Concept Erasure from Diffusion Models

Fengpeng Li 1,2, Kemou Li 1, Qizhou Wang 3,4, Bo Han 3, Jiantao Zhou 1

0 citations · 57 references · arXiv (Cornell University)


Published on arXiv

2602.06771

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Reduces the adversarial prompt attack success rate by ~5.31% on nudity erasure and ~24% on Van Gogh style erasure, while maintaining or improving FID and CLIP retention scores relative to state-of-the-art concept-erasure baselines.

AEGIS (Adversarial Erasure with Gradient-Informed Synergy)

Novel technique introduced


Concept erasure helps stop diffusion models (DMs) from generating harmful content, but current methods face a robustness–retention trade-off. Robustness means the model fine-tuned by a concept-erasure method resists reactivation of erased concepts, even under semantically related prompts; retention means unrelated concepts are preserved so the model's overall utility stays intact. Both are critical for concept erasure in practice, yet addressing them simultaneously is challenging: prior work typically strengthens one while degrading the other. For example, mapping a single erased prompt to a fixed safe target leaves class-level remnants exploitable by prompt attacks, whereas retention-oriented schemes underperform against adaptive adversaries. This paper introduces Adversarial Erasure with Gradient-Informed Synergy (AEGIS), a retention-data-free framework that advances both robustness and retention.


Key Contributions

  • Adversarial Erasure Target (AET): an optimizable embedding iteratively updated to approximate the semantic center of the erased concept class, making erasure robust to semantically related or adversarially crafted prompts
  • Gradient Regularization Projection (GRP): a conflict-aware, retention-data-free gradient rectification that selectively projects away components of the retention update that oppose the erasure direction, preserving unrelated concept utility without auxiliary data
  • Demonstrates that the vulnerability of existing concept-erasure methods stems from erasure targets that miss the semantic center of the concept class, and that AEGIS reduces adversarial prompt attack success rates by ~5.31% on nudity and ~24% on Van Gogh style under P4D and UnlearnDiffAtk
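The two mechanisms above can be illustrated with a minimal sketch. This is an interpretation of the paper's description, not its implementation: `aet_step` nudges an optimizable erasure-target embedding toward the mean (a stand-in for the "semantic center") of sampled concept-prompt embeddings, and `grp_project` strips from the retention gradient only the component that opposes the erasure direction, leaving non-conflicting updates untouched. All function names, the squared-distance objective, and the use of the embedding mean as the semantic center are assumptions for illustration.

```python
import numpy as np

def aet_step(target, concept_embeds, lr=0.1):
    """One hypothetical AET update: move the adversarial erasure target
    toward the semantic center (here, the mean) of embeddings sampled
    from semantically related / adversarial prompts for the concept.

    Gradient of 0.5 * ||target - mean||^2 w.r.t. target is (target - mean).
    """
    grad = target - concept_embeds.mean(axis=0)
    return target - lr * grad

def grp_project(g_retain, g_erase, eps=1e-12):
    """Hypothetical GRP rectification: if the retention gradient has a
    component opposing the erasure direction, project that component away;
    otherwise return the retention gradient unchanged (conflict-aware)."""
    u = g_erase / (np.linalg.norm(g_erase) + eps)  # unit erasure direction
    dot = float(g_retain @ u)
    if dot < 0:  # conflict: this update would partially undo erasure
        return g_retain - dot * u  # remove only the opposing component
    return g_retain
```

In the conflict case the rectified retention gradient becomes orthogonal to the erasure direction, so retention updates can no longer push the model back toward the erased concept; requiring no retention dataset is what makes the scheme "retention-data-free" in spirit.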

🛡️ Threat Analysis

Input Manipulation Attack

The paper defends against adversarial prompt attacks (P4D, UnlearnDiffAtk) that craft inputs at inference time to reactivate erased concepts in diffusion models. These are adversarial input-manipulation attacks that cause the model to produce outputs its safety mechanism was designed to prevent; AEGIS hardens the model against such input-level evasion.


Details

Domains
vision, generative
Model Types
diffusion
Threat Tags
white_box, black_box, inference_time, targeted
Datasets
I2P, COCO, WikiArt
Applications
text-to-image generation, diffusion model content safety, harmful concept suppression