defense 2026

STEP: Detecting Audio Backdoor Attacks via Stability-based Trigger Exposure Profiling

Kun Wang 1, Meng Chen 1, Junhao Wang 1, Yuli Wu 1, Li Lu 2, Chong Zhang 1, Peng Cheng 1, Jiaheng Zhang 3, Kui Ren 1



Published on arXiv (arXiv:2603.18103)

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

Achieves average AUROC of 97.92% and EER of 4.54% across seven backdoor attacks, substantially outperforming state-of-the-art baselines under black-box hard-label-only access

STEP

Novel technique introduced


With the widespread deployment of deep-learning-based speech models in security-critical applications, backdoor attacks have emerged as a serious threat: an adversary who poisons a small fraction of training data can implant a hidden trigger that controls the model's output while preserving normal behavior on clean inputs. Existing inference-time defenses are not well suited to the audio domain, as they either rely on trigger over-robustness assumptions that fail on transformation-based and semantic triggers, or depend on properties specific to image or text modalities. In this paper, we propose STEP (Stability-based Trigger Exposure Profiling), a black-box, retraining-free backdoor detector that operates under hard-label-only access. Its core idea is to exploit a characteristic dual anomaly of backdoor triggers: anomalous label stability under semantic-breaking perturbations, and anomalous label fragility under semantic-preserving perturbations. STEP profiles each test sample with two complementary perturbation branches that target these two properties respectively, scores the resulting stability features with one-class anomaly detectors trained on benign references, and fuses the two scores via unsupervised weighting. Extensive experiments across seven backdoor attacks show that STEP achieves an average AUROC of 97.92% and EER of 4.54%, substantially outperforming state-of-the-art baselines, and generalizes across model architectures, speech tasks, an open-set verification scenario, and over-the-air physical-world settings.
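The detection pipeline described in the abstract can be sketched in a few lines. Everything below is an illustrative stand-in under stated assumptions, not the paper's exact design: the toy hard-label oracle, the two perturbation functions, the per-branch `IsolationForest` detectors, and the equal-weight score fusion (standing in for the paper's unsupervised weighting) are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# --- Hypothetical stand-ins (not the paper's exact components) ---

def predict_label(audio):
    """Hard-label-only oracle; a toy model keyed to the waveform mean."""
    return int(audio.mean() > 0)

def perturb_breaking(audio):
    """Semantic-breaking branch: overwrite a quarter of the clip with noise."""
    out = audio.copy()
    n = len(out)
    start = int(rng.integers(0, n - n // 4))
    out[start:start + n // 4] = rng.normal(0.0, 1.0, n // 4)
    return out

def perturb_preserving(audio):
    """Semantic-preserving branch: mild gain jitter plus low-level noise."""
    return audio * rng.uniform(0.9, 1.1) + rng.normal(0.0, 0.01, len(audio))

def stability_features(audio, trials=20):
    """Label-agreement rates under the two perturbation branches.

    Backdoored inputs are expected to stay anomalously stable under the
    breaking branch and anomalously fragile under the preserving branch.
    """
    base = predict_label(audio)
    agree = lambda f: np.mean(
        [predict_label(f(audio)) == base for _ in range(trials)])
    return np.array([agree(perturb_breaking), agree(perturb_preserving)])

# Fit one one-class detector per branch on benign reference profiles.
benign = [rng.normal(0.3, 1.0, 8000) for _ in range(40)]
feats = np.stack([stability_features(a) for a in benign])
detectors = [IsolationForest(random_state=0).fit(feats[:, [i]])
             for i in range(2)]

def anomaly_score(audio):
    """Fuse per-branch anomaly scores; equal weights stand in for the
    paper's unsupervised weighting scheme. Higher = more suspicious."""
    f = stability_features(audio)
    scores = [-d.score_samples([[f[i]]])[0] for i, d in enumerate(detectors)]
    return float(np.mean(scores))
```

At inference time, a sample whose fused `anomaly_score` exceeds a threshold calibrated on the benign references would be flagged as trigger-carrying; no gradients, logits, or retraining are needed, matching the black-box hard-label-only setting.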


Key Contributions

  • STEP defense exploiting dual anomaly: backdoor triggers show anomalous stability under semantic-breaking perturbations AND anomalous fragility under semantic-preserving perturbations
  • Black-box, hard-label-only backdoor detection requiring no model internals, training data access, or retraining
  • Achieves 97.92% average AUROC across seven backdoor attacks, generalizes across model architectures, speech tasks, and physical over-the-air settings

🛡️ Threat Analysis

Model Poisoning

The paper proposes a defense mechanism to detect backdoor/trojan attacks on speech models. Backdoors are hidden malicious behaviors activated by specific triggers while maintaining normal behavior on clean inputs, which is the core definition of ML10 (Model Poisoning / Backdoors & Trojans).


Details

Domains
audio
Model Types
cnn, transformer, traditional_ml
Threat Tags
black_box, inference_time, training_time, physical
Applications
speaker recognition, speech command recognition, automatic speech recognition, voice authentication