Defense · 2025

Illuminating the Black Box: Real-Time Monitoring of Backdoor Unlearning in CNNs via Explainable AI

Tien Dat Hoang

0 citations · 7 references · arXiv

Published on arXiv · 2511.21291

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Reduces Attack Success Rate from 96.51% to 5.52% (94.28% reduction) on CIFAR-10 BadNets while retaining 99.48% of original clean accuracy (82.06%)

Trigger Attention Ratio (TAR)

Novel technique introduced


Backdoor attacks pose severe security threats to deep neural networks by embedding malicious triggers that force misclassification. While machine unlearning techniques can remove backdoor behaviors, current methods lack transparency and real-time interpretability. This paper introduces a novel framework that integrates Gradient-weighted Class Activation Mapping (Grad-CAM) into the unlearning process to provide real-time monitoring and explainability. We propose the Trigger Attention Ratio (TAR) metric to quantitatively measure the model's attention shift from trigger patterns to legitimate object features. Our balanced unlearning strategy combines gradient ascent on backdoor samples, Elastic Weight Consolidation (EWC) for catastrophic forgetting prevention, and a recovery phase for clean accuracy restoration. Experiments on CIFAR-10 with BadNets attacks demonstrate that our approach reduces Attack Success Rate (ASR) from 96.51% to 5.52% while retaining 99.48% of clean accuracy (82.06%), achieving a 94.28% ASR reduction. The integration of explainable AI enables transparent, observable, and verifiable backdoor removal.
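The Trigger Attention Ratio described above can be illustrated with a minimal sketch: given a (non-negative) Grad-CAM heatmap and a binary mask marking the trigger patch, compare the attention mass inside the trigger region to the mass on the rest of the image. The function name, normalization, and epsilon below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def trigger_attention_ratio(heatmap, trigger_mask):
    """Sketch of a TAR-style metric: Grad-CAM attention mass inside the
    trigger region divided by attention mass on the legitimate object
    region. A high value means the model still attends to the trigger;
    successful unlearning should drive it toward zero.
    (Assumed form; the paper's exact normalization may differ.)"""
    h = np.asarray(heatmap, dtype=float)
    m = np.asarray(trigger_mask, dtype=bool)
    trigger_attn = h[m].sum()
    object_attn = h[~m].sum()
    return trigger_attn / (object_attn + 1e-8)

# Toy example: a 4x4 map where the 2x2 trigger patch holds four times
# as much attention mass as the rest of the image.
hm = np.zeros((4, 4))
hm[2:, 2:] = 1.0      # attention on the trigger patch (mass 4)
hm[0, 0] = 1.0        # attention on the object (mass 1)
mask = np.zeros((4, 4), dtype=bool)
mask[2:, 2:] = True
print(round(trigger_attention_ratio(hm, mask), 4))  # → 4.0
```

Tracking this ratio at every training step is what lets the unlearning process be monitored in real time rather than verified only after the fact.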


Key Contributions

  • Real-time XAI monitoring framework integrating Grad-CAM directly into the backdoor unlearning training loop to visualize attention shifts as triggers are removed
  • Trigger Attention Ratio (TAR) metric that quantitatively measures the ratio of model attention on trigger regions versus legitimate object regions during unlearning
  • Balanced unlearning strategy combining gradient ascent on backdoor samples, Elastic Weight Consolidation (EWC) for catastrophic forgetting prevention, and a clean-accuracy recovery phase
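The interplay of the first two components of the balanced strategy can be sketched on a toy linear model: each step ascends the loss on a backdoor sample (to unlearn the trigger) while an EWC penalty pulls high-Fisher-importance weights back toward their pretrained values (to prevent catastrophic forgetting). All names, learning rates, and the diagonal Fisher values are illustrative assumptions; the paper applies this to CNNs, not a linear model, and adds a separate clean-accuracy recovery phase not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_grad(w, x, y):
    # Gradient of 0.5 * (x @ w - y)^2 with respect to w
    return x * (x @ w - y)

w_star = rng.normal(size=3)          # pretrained (poisoned) weights
fisher = np.array([10.0, 0.1, 0.1])  # diagonal Fisher importance (assumed)
w = w_star.copy()

x_bd, y_bd = np.array([1.0, 1.0, 1.0]), 0.0  # one backdoor sample
lr, lam = 0.05, 1.0                          # step size, EWC strength

for _ in range(50):
    ascent = mse_grad(w, x_bd, y_bd)   # gradient ASCENT on backdoor loss
    ewc = lam * fisher * (w - w_star)  # EWC pull toward pretrained weights
    w += lr * ascent - lr * ewc

# The high-Fisher weight (important for clean accuracy) stays close to
# its pretrained value, while low-Fisher weights absorb the unlearning.
print(abs(w[0] - w_star[0]) < abs(w[1] - w_star[1]))  # → True
```

The design point this illustrates: unconstrained gradient ascent would degrade clean accuracy along with the backdoor, so the EWC term restricts most of the movement to weights the Fisher information deems unimportant for the original task.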

🛡️ Threat Analysis

Model Poisoning

The paper proposes a defense specifically against backdoor/trojan attacks: it removes hidden backdoor behaviors (BadNets triggers) from trained CNNs via a machine unlearning strategy, reducing ASR from 96.51% to 5.52%. The threat model is a trigger-based backdoor, and the evaluation is measured directly by Attack Success Rate — squarely ML10 (Model Poisoning).


Details

Domains
vision
Model Types
cnn
Threat Tags
training_time · targeted · digital
Datasets
CIFAR-10
Applications
image classification