MORE: Multi-Objective Adversarial Attacks on Speech Recognition

Xiaoxue Gao 1, Zexin Li 2, Yiming Chen 3, Nancy F. Chen 1

0 citations · 35 references · arXiv

Published on arXiv: 2601.01852

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

MORE compels Whisper-family ASR models to produce significantly longer transcriptions (resource exhaustion) while sustaining high word error rates, outperforming efficiency-only baselines like SlothSpeech through a single adversarial audio input.

MORE (Multi-Objective Repetitive Doubling Encouragement)

Novel technique introduced


The emergence of large-scale automatic speech recognition (ASR) models such as Whisper has greatly expanded their adoption across diverse real-world applications. Ensuring robustness against even minor input perturbations is therefore critical for maintaining reliable performance in real-time environments. While prior work has mainly examined accuracy degradation under adversarial attacks, robustness with respect to efficiency remains largely unexplored. This narrow focus provides only a partial understanding of ASR model vulnerabilities. To address this gap, we conduct a comprehensive study of ASR robustness under multiple attack scenarios. We introduce MORE, a multi-objective repetitive doubling encouragement attack, which jointly degrades recognition accuracy and inference efficiency through a hierarchical staged repulsion-anchoring mechanism. Specifically, we reformulate multi-objective adversarial optimization into a hierarchical framework that sequentially achieves the dual objectives. To further amplify effectiveness, we propose a novel repetitive encouragement doubling objective (REDO) that induces duplicative text generation by maintaining accuracy degradation and periodically doubling the predicted sequence length. Overall, MORE compels ASR models to produce incorrect transcriptions at a substantially higher computational cost, triggered by a single adversarial input. Experiments show that MORE consistently yields significantly longer transcriptions while maintaining high word error rates compared to existing baselines, underscoring its effectiveness in multi-objective adversarial attacks.


Key Contributions

  • MORE: a multi-objective adversarial attack that jointly degrades ASR accuracy and inference efficiency via a hierarchical repulsion-anchoring optimization strategy
  • REDO (Repetitive Encouragement Doubling Objective): a novel loss component that induces duplicative text generation by periodically doubling predicted sequence length while maintaining accuracy degradation
  • Asymmetric interleaving mechanism with EOS suppression to reinforce periodic context doubling and prevent early decoding termination
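The contributions above combine three pressures on the decoder: push probability away from the correct tokens, reward longer outputs, and suppress end-of-sequence. A minimal sketch of such a combined objective is below; the function name, weights, and exact terms are illustrative assumptions, not the paper's formulation (the hierarchical staging and doubling schedule are omitted).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def more_style_loss(logits, target_ids, eos_id, w_rep=1.0, w_len=0.5, w_eos=0.5):
    """Toy multi-objective adversarial loss (hypothetical, simplified).

    logits: (T, V) decoder logits for one candidate transcription.
    The attacker *minimizes* this loss w.r.t. the audio perturbation:
    - repulsion: make the ground-truth tokens unlikely (accuracy attack)
    - length:    reward more decoding steps (efficiency attack)
    - eos:       penalize end-of-sequence probability (prevents early stop)
    """
    probs = softmax(logits)
    steps = min(len(target_ids), len(logits))
    # Repulsion term: mean probability still assigned to the true tokens.
    rep = np.mean([probs[t, target_ids[t]] for t in range(steps)])
    # Length encouragement: log-bonus for longer predicted sequences.
    length_bonus = np.log(len(logits))
    # EOS suppression: average EOS probability across decoding steps.
    eos = probs[:, eos_id].mean()
    return w_rep * rep - w_len * length_bonus + w_eos * eos
```

Boosting the EOS logits raises the loss (the attack is failing to suppress termination), while a longer logit sequence lowers it, matching the attack's dual objective.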

🛡️ Threat Analysis

Input Manipulation Attack

MORE crafts adversarial audio perturbations using gradient-based optimization (the repulsion-anchoring mechanism) to degrade transcription accuracy (higher WER) and induce abnormally long transcriptions at inference time — a dual-objective input manipulation attack on ASR models. The attack vector is a single crafted adversarial audio input that exploits the model's autoregressive decoding.
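The generic shape of such a gradient-based audio perturbation attack is a projected signed-gradient loop under an L-infinity budget. The sketch below is a simplified, hypothetical loop in that family (PGD-style), not the paper's staged repulsion-anchoring procedure; `grad_fn` stands in for backpropagation through the ASR model's attack loss.

```python
import numpy as np

def pgd_audio_attack(audio, grad_fn, eps=0.01, step=0.002, iters=10):
    """Sketch of a gradient-based waveform perturbation loop (simplified).

    audio:   waveform samples in [-1, 1]
    grad_fn: returns the gradient of the attacker's loss w.r.t. the input
    eps:     L_inf perturbation budget, kept small so the audio sounds clean
    """
    delta = np.zeros_like(audio)
    for _ in range(iters):
        g = grad_fn(audio + delta)
        delta = delta - step * np.sign(g)    # signed-gradient descent step
        delta = np.clip(delta, -eps, eps)    # project back into the budget
    return np.clip(audio + delta, -1.0, 1.0) # keep a valid waveform range
```

With a toy quadratic loss 0.5·||x − x*||², whose gradient is x − x*, the loop drives the audio toward x* while never exceeding the eps budget.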


Details

Domains
audio
Model Types
transformer
Threat Tags
white_box · inference_time · targeted · digital
Applications
automatic speech recognition · real-time asr systems · virtual assistants