defense 2025

Sealing The Backdoor: Unlearning Adversarial Text Triggers In Diffusion Models Using Knowledge Distillation

Ashwath Vaithinathan Aravindan , Abha Jha , Matthew Salaway , Atharva Sandeep Bhide , Duygu Nur Yaldiz

0 citations

α

Published on arXiv

2508.18235

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Achieves 100% backdoor removal accuracy for pixel-pattern triggers and 93% for style-based attacks in text-to-image diffusion models without sacrificing image fidelity or generation robustness.

SKD-CAG (Self-Knowledge Distillation with Cross-Attention Guidance)

Novel technique introduced


Text-to-image diffusion models have revolutionized generative AI, but their vulnerability to backdoor attacks poses significant security risks. Adversaries can inject imperceptible textual triggers into training data, causing models to generate manipulated outputs. Although text-based backdoor defenses in classification models are well-explored, generative models lack effective mitigation techniques against. We address this by selectively erasing the model's learned associations between adversarial text triggers and poisoned outputs, while preserving overall generation quality. Our approach, Self-Knowledge Distillation with Cross-Attention Guidance (SKD-CAG), uses knowledge distillation to guide the model in correcting responses to poisoned prompts while maintaining image quality by exploiting the fact that the backdoored model still produces clean outputs in the absence of triggers. Using the cross-attention mechanism, SKD-CAG neutralizes backdoor influences at the attention level, ensuring the targeted removal of adversarial effects. Extensive experiments show that our method outperforms existing approaches, achieving removal accuracy 100\% for pixel backdoors and 93\% for style-based attacks, without sacrificing robustness or image fidelity. Our findings highlight targeted unlearning as a promising defense to secure generative models. Code and model weights can be found at https://github.com/Mystic-Slice/Sealing-The-Backdoor .


Key Contributions

  • SKD-CAG: Self-Knowledge Distillation with Cross-Attention Guidance that exploits the backdoored model itself as a clean teacher (on trigger-free prompts) to guide removal of adversarial trigger associations
  • Cross-attention-level neutralization of backdoor influence, ensuring precise removal without degrading overall generation quality
  • Achieves 100% removal accuracy for pixel-pattern backdoors and 93% for style-based backdoors, outperforming existing defenses

🛡️ Threat Analysis

Model Poisoning

The paper directly defends against backdoor attacks where adversarial text triggers are injected into training data to cause diffusion models to produce manipulated outputs. SKD-CAG is a targeted unlearning defense that neutralizes the hidden trigger-to-output associations at the cross-attention level.


Details

Domains
visionnlpgenerative
Model Types
diffusiontransformer
Threat Tags
training_timetargeted
Applications
text-to-image generation