Taught Well Learned Ill: Towards Distillation-conditional Backdoor Attack

Yukun Chen 1, Boheng Li 2, Yu Yuan 1, Leyi Qi 1, Yiming Li 2, Tianwei Zhang 2, Zhan Qin 1, Kui Ren 1

2 citations · 1 influential · 85 references · arXiv

Published on arXiv: 2509.23871

Model Poisoning

OWASP ML Top 10 — ML10

Transfer Learning Attack

OWASP ML Top 10 — ML07

Key Finding

SCAR successfully injects backdoors into teacher models that pass standard backdoor detection, yet reliably activate in student models produced via clean-dataset knowledge distillation.

SCAR

Novel technique introduced


Knowledge distillation (KD) is a vital technique for deploying deep neural networks (DNNs) on resource-constrained devices by transferring knowledge from large teacher models to lightweight student models. While teacher models from third-party platforms may undergo security verification (e.g., backdoor detection), we uncover a novel and critical threat: the distillation-conditional backdoor attack (DCBA). A DCBA injects dormant and undetectable backdoors into teacher models, which become activated in student models via the KD process, even with clean distillation datasets. Although direct extensions of existing methods are ineffective for DCBA, we realize this attack by formulating it as a bilevel optimization problem and proposing a simple yet effective method (i.e., SCAR). Specifically, the inner optimization simulates the KD process by optimizing a surrogate student model, while the outer optimization leverages the outputs of this surrogate to optimize the teacher model for implanting the conditional backdoor. SCAR solves this complex optimization using an implicit differentiation algorithm with a pre-optimized trigger injection function. Extensive experiments across diverse datasets, model architectures, and KD techniques validate the effectiveness of SCAR and its resistance to existing backdoor detection, highlighting a significant yet previously overlooked vulnerability in the KD process. Our code is available at https://github.com/WhitolfChen/SCAR.
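The bilevel structure described above can be illustrated with a deliberately tiny sketch. This is not the paper's SCAR implementation: it uses logistic models in NumPy, a fixed hand-chosen trigger (SCAR instead pre-optimizes a trigger injection function), and central finite differences in place of the implicit-differentiation algorithm. All names, constants, and the toy data are illustrative. The teacher gets one extra (quadratic) feature the student lacks, which is the capacity gap that lets the teacher behave correctly on triggered inputs while the distilled student does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy clean data: the label depends only on x1; x2 is benign noise.
n = 200
X = np.stack([rng.normal(0, 1, n), rng.normal(0, 0.3, n)], axis=1)
y = (X[:, 0] > 0).astype(float)

# Hypothetical fixed trigger: shift x2 (SCAR would pre-optimize this).
TRIGGER = np.array([0.0, 1.0])

def sigmoid(z):
    return 0.5 * (1.0 + np.tanh(0.5 * z))  # numerically stable logistic

def teacher_logit(wt, X):
    # The teacher has an extra quadratic feature (x2^2) that the
    # lower-capacity linear student lacks.
    return wt[0] * X[:, 0] + wt[1] * X[:, 1] + wt[2] * X[:, 1] ** 2 + wt[3]

def student_logit(ws, X):
    return X @ ws[:2] + ws[2]

def distill_student(wt, lr=1.0, steps=300):
    # Inner problem: fit a linear surrogate student to the teacher's
    # soft outputs on CLEAN data only (simulated KD).
    soft = sigmoid(teacher_logit(wt, X))
    ws = np.zeros(3)
    for _ in range(steps):
        g = sigmoid(student_logit(ws, X)) - soft  # CE gradient w.r.t. logits
        ws[:2] -= lr * (X.T @ g) / n
        ws[2] -= lr * g.mean()
    return ws

def bce(p, t):
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -(t * np.log(p) + (1 - t) * np.log(1 - p)).mean()

def outer_loss(wt):
    Xtrig = X + TRIGGER
    # Stealth: the teacher itself must stay correct on clean AND
    # triggered inputs, so teacher-side inspection finds nothing.
    stealth = (bce(sigmoid(teacher_logit(wt, X)), y)
               + bce(sigmoid(teacher_logit(wt, Xtrig)), y))
    # Attack: the student distilled from this teacher must emit the
    # target class (1) on triggered inputs.
    ws = distill_student(wt)
    attack = bce(sigmoid(student_logit(ws, Xtrig)), np.ones(n))
    return stealth + attack

# Outer problem: central finite differences stand in for the paper's
# implicit-differentiation step (affordable for 4 teacher parameters).
wt = np.array([2.0, 0.0, 0.0, 0.0])
loss_init = outer_loss(wt)
eps, lr = 1e-4, 0.5
for _ in range(150):
    grad = np.zeros_like(wt)
    for i in range(wt.size):
        d = np.zeros_like(wt)
        d[i] = eps
        grad[i] = (outer_loss(wt + d) - outer_loss(wt - d)) / (2 * eps)
    wt -= lr * grad
loss_final = outer_loss(wt)

ws = distill_student(wt)
Xtrig = X + TRIGGER
teacher_clean_acc = ((teacher_logit(wt, X) > 0) == (y == 1)).mean()
teacher_trig_acc = ((teacher_logit(wt, Xtrig) > 0) == (y == 1)).mean()
attack_rate = (student_logit(ws, Xtrig) > 0).mean()  # fraction forced to target
print(f"teacher acc clean/triggered: {teacher_clean_acc:.2f}/{teacher_trig_acc:.2f}")
print(f"distilled-student target rate on triggered inputs: {attack_rate:.2f}")
```

In this sketch the outer objective can satisfy both terms simultaneously because the teacher's quadratic term can cancel the trigger's effect on its own logit while the linear student, which only copies the linear part during distillation, cannot; this mirrors (in miniature) why the backdoor is dormant in the teacher yet surfaces in the student.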


Key Contributions

  • Identifies and formalizes the distillation-conditional backdoor attack (DCBA) threat, where backdoors pass teacher-model detection but activate in student models via knowledge distillation
  • Proposes SCAR, a bilevel optimization method using implicit differentiation and a pre-optimized trigger injection function to implement DCBA
  • Demonstrates SCAR's effectiveness and resistance to existing backdoor detection across diverse datasets, model architectures, and KD techniques

🛡️ Threat Analysis

Transfer Learning Attack

The attack is explicitly conditioned on and exploits the knowledge distillation (transfer learning) process — backdoors are dormant in the teacher but are designed to activate specifically through the KD process in the student, making the transfer learning pipeline itself the attack vector.

Model Poisoning

DCBA/SCAR is a backdoor/trojan attack: it implants hidden, trigger-activated malicious behavior in teacher models that surfaces only in student models, with the model behaving normally otherwise and resisting existing backdoor detection methods.


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
training_time, targeted, digital
Datasets
CIFAR-10, CIFAR-100, ImageNet
Applications
image classification, model compression, knowledge distillation pipelines