Degrading Voice: A Comprehensive Overview of Robust Voice Conversion Through Input Manipulation
Xining Song 1, Zhihua Wei 1, Rui Wang 2, Haixiao Hu 3, Yanxiang Chen 4, Meng Han 3
Published on arXiv
2512.06304
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Identifies a significant gap in understanding the robustness of voice conversion models under diverse input degradations, and proposes a unified attack-defense taxonomy together with a multi-dimensional evaluation framework
Identity, accent, style, and emotion are essential components of human speech. Voice conversion (VC) techniques process the speech signals of two input speakers, together with auxiliary information in other modalities such as prompts and emotion tags, and transfer para-linguistic features from one speaker to the other while preserving the linguistic content. Recently, VC models have made rapid advances in both generation quality and personalization, attracting considerable attention for diverse applications, including privacy preservation, voice-print reproduction for the deceased, and dysarthric speech recovery. However, because these models are trained on clean data, they learn only non-robust features and consequently perform poorly on degraded input speech in real-world scenarios involving additive noise, reverberation, adversarial attacks, or even minor perturbations. Robust deployment is therefore essential, especially in real-world settings. Although recent studies attempt to identify potential attacks on, and countermeasures for, VC systems, there remains a significant gap in the comprehensive understanding of how robust VC models are under input manipulation. This raises several questions: to what extent do different forms of input degradation attacks alter the expected output of VC models? Is there room to optimize these attack and defense strategies? To answer these questions, we classify existing attack and defense methods from the perspective of input manipulation and evaluate the impact of degraded input speech across four dimensions: intelligibility, naturalness, timbre similarity, and subjective perception. Finally, we outline open issues and future directions.
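The degradations discussed above (additive noise and reverberation) are straightforward to simulate on a raw waveform. The sketch below is illustrative only and is not from the paper: it injects white Gaussian noise at a target SNR and approximates reverberation by convolving with a synthetic exponentially decaying impulse response; the RT60 value and sample rate are assumed parameters.

```python
import numpy as np

def add_noise(speech: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Inject white Gaussian noise at a target signal-to-noise ratio (dB)."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(speech.shape)
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale noise so that speech_power / (scale^2 * noise_power) == 10^(snr_db/10)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

def add_reverb(speech: np.ndarray, rt60_s: float = 0.3, sr: int = 16000) -> np.ndarray:
    """Approximate room reverberation with a synthetic decaying impulse response.

    rt60_s and sr are illustrative defaults, not values from the survey.
    """
    n = int(rt60_s * sr)
    t = np.arange(n) / sr
    # Noise-like tail with ~60 dB decay over rt60_s (6.9 ≈ ln(10^3))
    ir = np.exp(-6.9 * t / rt60_s) * np.random.default_rng(1).standard_normal(n)
    ir[0] = 1.0  # direct path
    wet = np.convolve(speech, ir)[: len(speech)]
    return wet / (np.max(np.abs(wet)) + 1e-9)
```

Feeding such degraded waveforms to a VC model (in place of clean input) is the inference-time threat model the survey evaluates.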
Key Contributions
- Taxonomy of input manipulation attacks on voice conversion systems, covering adversarial perturbations, noise injection, and reverberation
- Classification of defense strategies for robust VC deployment organized by defensive state
- Evaluation framework assessing VC robustness across intelligibility, naturalness, timbre similarity, and subjective perception
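Of the four evaluation dimensions listed above, timbre similarity is typically scored objectively by comparing fixed-dimensional speaker embeddings of the converted and target speech. The function below is a minimal sketch of that scoring step, assuming the embeddings come from some pretrained speaker encoder (not specified here); the survey itself does not prescribe this exact implementation.

```python
import numpy as np

def timbre_similarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings (higher = more similar timbre)."""
    a = emb_a / (np.linalg.norm(emb_a) + 1e-9)
    b = emb_b / (np.linalg.norm(emb_b) + 1e-9)
    return float(np.dot(a, b))
```

Comparing this score for clean versus degraded inputs quantifies how much an input manipulation shifts the converted voice away from the intended target timbre.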
🛡️ Threat Analysis
Comprehensively surveys input manipulation attacks — adversarial perturbations, additive noise, reverberation — that degrade voice conversion model outputs at inference time, alongside countermeasures. The VC model is the attack target, and the threat model is inference-time input degradation.