Cross-Modal Robustness Transfer (CMRT): Training Robust Speech Translation Models Using Adversarial Text
Abderrahmane Issam, Yusuf Can Semerci, Jan Scholtes, Gerasimos Spanakis
Published on arXiv
2602.11933
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
CMRT improves adversarial robustness by more than 3 BLEU points on average across four language pairs without requiring any adversarial speech data during training.
CMRT (Cross-Modal Robustness Transfer)
Novel technique introduced
End-to-End Speech Translation (E2E-ST) has seen significant advancements, yet current models are primarily benchmarked on curated, "clean" datasets. This overlooks critical real-world challenges, such as robustness to the inflectional variations common in non-native or dialectal speech. In this work, we adapt a text-based adversarial attack targeting inflectional morphology to the speech domain and demonstrate that state-of-the-art E2E-ST models are highly vulnerable to it. While adversarial training effectively mitigates such risks in text-based tasks, generating high-quality adversarial speech data remains computationally expensive and technically challenging. To address this, we propose Cross-Modal Robustness Transfer (CMRT), a framework that transfers adversarial robustness from the text modality to the speech modality, eliminating the need for adversarial speech data during training. Extensive experiments across four language pairs demonstrate that CMRT improves adversarial robustness by an average of more than 3 BLEU points, establishing a new baseline for robust E2E-ST without the overhead of generating adversarial speech.
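The core idea of CMRT can be illustrated with a minimal sketch: align speech encoder outputs with text embeddings in a shared latent space, then reuse that alignment to inject *adversarial text* embeddings as a stand-in for adversarial speech. The loss functions and variable names below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of CMRT's two training signals.
# Assumption: both encoders map into one shared latent space of dimension d.
import numpy as np

def alignment_loss(speech_emb, text_emb):
    """Stage 1 (semantic alignment): pull the speech representation
    toward the clean text embedding via mean squared error."""
    return float(np.mean((speech_emb - text_emb) ** 2))

def robustness_loss(speech_emb, adv_text_emb):
    """Stage 2 (robustness fine-tuning): align the speech representation
    with an *adversarial text* embedding, so robustness learned in the
    text modality transfers to speech without adversarial audio."""
    return alignment_loss(speech_emb, adv_text_emb)

rng = np.random.default_rng(0)
d = 8
speech = rng.normal(size=d)
clean_text = speech + 0.1 * rng.normal(size=d)    # nearly aligned pair
adv_text = clean_text + 0.5 * rng.normal(size=d)  # inflectionally perturbed

print(alignment_loss(speech, clean_text), robustness_loss(speech, adv_text))
```

In a real system both losses would backpropagate into the speech encoder; the sketch only shows how the same alignment objective is reused with adversarial text embeddings in place of (unavailable) adversarial speech.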
Key Contributions
- Speech-MORPHEUS: adaptation of the MORPHEUS inflectional adversarial attack from NMT to the speech translation domain, demonstrating high vulnerability of state-of-the-art E2E-ST models
- CMRT framework: a two-stage method (semantic alignment + robustness fine-tuning) that transfers adversarial robustness from text to speech by injecting adversarial text embeddings into a shared cross-modal latent space
- Demonstrated 3+ BLEU point improvement on adversarial speech across four language pairs without adversarial speech generation, while preserving clean-set performance
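The MORPHEUS family of attacks greedily swaps each word's inflection for the variant that most degrades the model. A toy sketch of that greedy search follows; the mini inflection table and the stand-in loss function are hypothetical placeholders, not the paper's actual tooling (which scores candidates with the victim translation model).

```python
# Toy sketch of a MORPHEUS-style greedy inflectional attack.
# INFLECTIONS is a hypothetical mini-lexicon; a real attack would use a
# morphological inflection resource and the victim model's loss.
INFLECTIONS = {
    "run": ["run", "runs", "ran", "running"],
    "dog": ["dog", "dogs"],
}

def greedy_inflection_attack(tokens, loss_fn):
    """For each position, keep the inflected variant that maximizes the
    model loss, committing to the best choice before moving on."""
    adv = list(tokens)
    for i, tok in enumerate(tokens):
        candidates = INFLECTIONS.get(tok, [tok])
        adv[i] = max(candidates,
                     key=lambda c: loss_fn(adv[:i] + [c] + adv[i + 1:]))
    return adv

# Stand-in loss: pretend longer (rarer) inflections confuse the model more.
toy_loss = lambda toks: sum(len(t) for t in toks)

print(greedy_inflection_attack(["the", "dog", "run"], toy_loss))
# → ['the', 'dogs', 'running']
```

Speech-MORPHEUS applies the same search to speech translation, e.g. by synthesizing or perturbing the spoken input so the model hears the adversarially inflected sentence.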
🛡️ Threat Analysis
The paper adapts inflectional morphology adversarial perturbations (Speech-MORPHEUS) to attack E2E-ST models at inference time, causing degraded translation outputs, and proposes CMRT as an adversarial training defense that improves robustness without requiring adversarial speech data.