Defense · 2025

Beyond Memorization: Selective Learning for Copyright-Safe Diffusion Model Training

Divya Kothandaraman, Jaclyn Pytlarz

0 citations · 54 references · arXiv (Cornell University)

Published on arXiv · 2512.11194

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

Gradient projection onto the orthogonal complement of sensitive embedding spaces drastically reduces concept-level memorization in diffusion models while preserving generation quality against adversarial feature extraction.

Gradient Projection for Concept-Level Feature Exclusion

Novel technique introduced


Memorization in large-scale text-to-image diffusion models poses significant security and intellectual property risks, enabling adversarial attribute extraction and the unauthorized reproduction of sensitive or proprietary features. While conventional dememorization techniques, such as regularization and data filtering, limit overfitting to specific training examples, they fail to systematically prevent the internalization of prohibited concept-level features. Simply discarding all images containing a sensitive feature wastes invaluable training data, necessitating a method for selective learning at the concept level. We introduce a gradient projection method designed to enforce a stringent requirement of concept-level feature exclusion. Our defense operates during backpropagation by systematically identifying and excising training signals aligned with embeddings of prohibited attributes. Specifically, we project each gradient update onto the orthogonal complement of the sensitive feature's embedding space, thereby zeroing out its influence on the model's weights. Our method integrates seamlessly into standard diffusion model training pipelines and complements existing defenses. We analyze our method against an adversary aiming for feature extraction. In extensive experiments, we demonstrate that our framework drastically reduces memorization while rigorously preserving generation quality and semantic fidelity. By reframing memorization control as selective learning, our approach establishes a new paradigm for IP-safe and privacy-preserving generative AI.
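The core operation described in the abstract, projecting each gradient onto the orthogonal complement of the sensitive feature's embedding subspace, can be illustrated with a minimal NumPy sketch. The function name and the random embedding directions below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def project_out(grad, sensitive_dirs):
    """Project a gradient onto the orthogonal complement of the
    subspace spanned by sensitive-feature embedding directions.

    grad: (d,) flattened gradient vector
    sensitive_dirs: (k, d) rows spanning the prohibited subspace
    """
    # Orthonormalize the sensitive directions (reduced QR on the transpose).
    q, _ = np.linalg.qr(sensitive_dirs.T)   # q: (d, k), orthonormal columns
    # Subtract the component of grad lying inside the sensitive subspace.
    return grad - q @ (q.T @ grad)

rng = np.random.default_rng(0)
v = rng.normal(size=(2, 8))      # two hypothetical sensitive directions
g = rng.normal(size=8)           # a raw gradient
g_safe = project_out(g, v)

# The projected gradient carries no signal along the sensitive directions.
print(np.allclose(v @ g_safe, 0.0))   # True
```

Because `g_safe` is orthogonal to every sensitive direction, a weight update along it cannot move the model toward the prohibited concept, while all remaining gradient signal is retained.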


Key Contributions

  • Gradient projection method that projects each weight update onto the orthogonal complement of a sensitive feature's embedding space, zeroing out its influence during backpropagation
  • Concept-level selective learning paradigm that excludes prohibited attributes without discarding entire training images containing them
  • Empirical evaluation against an adversary performing feature extraction, demonstrating drastic memorization reduction while preserving generation quality and semantic fidelity
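To show how the projection composes with an ordinary training loop, the toy example below runs projected gradient descent on a least-squares problem. The objective, dimensions, and learning rate are arbitrary stand-ins; the point is only the claimed invariant, that weights initialized with no component along the sensitive direction never acquire one:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
v = rng.normal(size=d)
v /= np.linalg.norm(v)           # one hypothetical sensitive direction
w = np.zeros(d)                  # toy model weights

# Toy least-squares objective standing in for the diffusion loss.
X = rng.normal(size=(32, d))
y = X @ rng.normal(size=d)

for _ in range(200):
    grad = X.T @ (X @ w - y) / len(X)
    grad = grad - (v @ grad) * v   # excise the sensitive component each step
    w -= 0.05 * grad

# Every update was orthogonal to v, so v never influenced the weights.
print(abs(v @ w) < 1e-9)   # True
```

In a real pipeline the projection would be applied per parameter tensor during backpropagation, which is why the paper notes the method drops into standard diffusion training without changing the loss or the data.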

🛡️ Threat Analysis

Model Inversion Attack

The paper's explicit threat model is an adversary performing 'feature extraction' — recovering private training data attributes (sensitive features, proprietary content) from a trained diffusion model. The gradient projection defense prevents the model from internalizing prohibited concept-level features during training, directly neutralizing model inversion / adversarial attribute extraction attacks.


Details

Domains
vision · generative
Model Types
diffusion
Threat Tags
training_time · white_box · targeted
Applications
text-to-image generation · copyright-safe generative AI · privacy-preserving diffusion model training