
NT-ML: Backdoor Defense via Non-target Label Training and Mutual Learning

Wenjie Huo , Katinka Wolter



Published on arXiv: 2508.05404

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

NT-ML effectively defends against 6 backdoor attacks, including the clean-label Narcissus attack, using only a small number of clean samples, and outperforms 5 state-of-the-art backdoor defenses

NT-ML

Novel technique introduced


Recent studies have shown that deep neural networks (DNNs) are vulnerable to backdoor attacks, in which a designed trigger is injected into the training dataset and causes erroneous predictions when activated. In this paper, we propose a novel defense mechanism, Non-target label Training and Mutual Learning (NT-ML), which can successfully restore a model poisoned by advanced backdoor attacks. NT aims to reduce the harm of poisoned data by retraining the model using the outputs of standard training. This stage yields a teacher model with high accuracy on clean data and a student model with higher confidence in correct predictions on poisoned data. The teacher and student then learn each other's strengths through ML, producing a purified student model. Extensive experiments show that NT-ML can effectively defend against 6 backdoor attacks with a small number of clean samples, outperforming 5 state-of-the-art backdoor defenses.


Key Contributions

  • Non-target label Training (NT) stage that retrains a poisoned model to reduce trigger influence, yielding a clean-accurate teacher and a poisoned-confident student
  • Mutual Learning (ML) stage where teacher and student exchange complementary strengths to produce a purified student model
  • Defense effective with only a small number of clean samples, outperforming 5 state-of-the-art defenses across 6 backdoor attacks including the challenging clean-label Narcissus attack
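The ML stage described above pairs a clean-accurate teacher with a poisoned-confident student so each can correct the other. The paper does not spell out its loss here, but mutual learning is conventionally formulated as each network minimizing cross-entropy on ground-truth labels plus a KL-divergence term toward its peer's softened predictions. A minimal NumPy sketch of that conventional two-way objective (the function name and the `alpha` weighting are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q):
    """KL(p || q), averaged over the batch."""
    return float(np.mean(np.sum(p * np.log(p / q), axis=-1)))

def mutual_learning_losses(student_logits, teacher_logits, labels, alpha=0.5):
    """Illustrative two-way mutual-learning objective: each model minimizes
    cross-entropy on clean labels plus a KL term pulling it toward its peer.
    The alpha weight is an assumption for the sketch, not from the paper."""
    ps, pt = softmax(student_logits), softmax(teacher_logits)
    n = len(labels)
    ce_s = float(-np.mean(np.log(ps[np.arange(n), labels])))
    ce_t = float(-np.mean(np.log(pt[np.arange(n), labels])))
    loss_s = ce_s + alpha * kl_div(pt, ps)  # student mimics teacher
    loss_t = ce_t + alpha * kl_div(ps, pt)  # teacher mimics student
    return loss_s, loss_t
```

In actual training both losses would be back-propagated in alternating steps on the small clean set, so the purified student inherits the teacher's clean accuracy while its own confident corrections on poisoned inputs flow back to the teacher.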

🛡️ Threat Analysis

Model Poisoning

Proposes NT-ML as a model-level defense against backdoor/trojan attacks, where triggers injected into training data cause targeted misclassification; defends against 6 attack variants including clean-label attacks like Narcissus.


Details

Domains
vision
Model Types
cnn
Threat Tags
training_time, targeted, digital
Datasets
CIFAR-10
Applications
image classification