defense · arXiv · Mar 23, 2026
Matías Pizarro, Raghavan Narasimhan, Asja Fischer · Ruhr University Bochum
Defense against audio adversarial attacks by randomly varying model precision during inference and detecting attacks via precision-based output comparison
Input Manipulation Attack · audio
With the increasing deployment of automated and agentic systems, ensuring the adversarial robustness of automatic speech recognition (ASR) models has become critical. We observe that changing the precision of an ASR model during inference reduces the likelihood that adversarial attacks succeed. We exploit this fact to make models more robust by randomly sampling the inference precision for each prediction. Moreover, the same insight yields an adversarial-example detection strategy: comparing outputs produced at different precisions and feeding the differences to a simple Gaussian classifier. An experimental analysis demonstrates a significant increase in robustness and competitive detection performance across various ASR models and attack types.
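The abstract describes two mechanisms: randomly sampling the model's numeric precision per prediction, and flagging inputs whose outputs disagree across precisions. A minimal toy sketch of both ideas, using a single linear layer with argmax decoding as a stand-in for a full ASR model (all names and shapes here are illustrative assumptions, not the paper's implementation; a plain disagreement check stands in for the paper's Gaussian classifier):

```python
import random
import numpy as np

# Toy stand-in for an ASR model: one linear layer over 8 features,
# decoding to one of 4 "characters" via argmax. Purely illustrative.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))

PRECISIONS = [np.float16, np.float32, np.float64]

def predict(x, dtype):
    """Run the toy model with weights and input cast to the given precision."""
    logits = W.astype(dtype) @ x.astype(dtype)
    return int(np.argmax(logits))

def randomized_predict(x):
    """Defense: sample the inference precision at random per prediction."""
    return predict(x, random.choice(PRECISIONS))

def looks_adversarial(x):
    """Detection sketch: flag inputs whose decoded output changes when the
    precision changes. (The paper instead compares outputs with a simple
    Gaussian classifier; disagreement is the crudest version of that test.)"""
    outputs = {predict(x, p) for p in PRECISIONS}
    return len(outputs) > 1

x = rng.standard_normal(8)
print(randomized_predict(x), looks_adversarial(x))
```

The intuition carried over from the abstract: a benign input decodes the same way at every precision, while an adversarial perturbation is tuned to one numeric regime and tends to break when the rounding behavior shifts.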
cnn · transformer