Investigating Safety Vulnerabilities of Large Audio-Language Models Under Speaker Emotional Variations
Bo-Han Feng 1, Chien-Feng Liu 1, Yu-Hsuan Li Liang 1, Chih-Kai Yang 1, Szu-Wei Fu 2, Zhehuai Chen 2, Ke-Han Lu 1, Sung-Feng Huang 2, Chao-Han Huck Yang 2, Yu-Chiang Frank Wang 2, Yun-Nung Chen 1, Hung-yi Lee 1
Published on arXiv (2510.16893)
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Medium-intensity emotional speech poses the greatest jailbreak risk in LALMs, surpassing both low and high intensities. This non-monotonic effect shows that safety alignment is neither stable nor robust under emotional variation.
Large audio-language models (LALMs) extend text-based LLMs with auditory understanding, offering new opportunities for multimodal applications. While their perception, reasoning, and task performance have been widely studied, their safety alignment under paralinguistic variation remains underexplored. This work systematically investigates the role of speaker emotion. We construct a dataset of malicious speech instructions expressed across multiple emotions and intensities, and evaluate several state-of-the-art LALMs. Our results reveal substantial safety inconsistencies: different emotions elicit varying levels of unsafe responses, and the effect of intensity is non-monotonic, with medium expressions often posing the greatest risk. These findings highlight an overlooked vulnerability in LALMs and call for alignment strategies explicitly designed to ensure robustness under emotional variation, a prerequisite for trustworthy deployment in real-world settings.
Key Contributions
- First systematic study of how speaker emotion and intensity affect safety alignment in large audio-language models (LALMs)
- Dataset of malicious speech instructions synthesized across multiple emotions and intensities using CosyVoice TTS, with human annotation for quality verification
- Discovery of a non-monotonic safety vulnerability: medium emotional intensity elicits more unsafe responses than both low and high intensities across evaluated LALMs
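The evaluation protocol behind the third contribution can be sketched in a few lines: compute the unsafe-response rate for each (emotion, intensity) condition and check whether the medium intensity exceeds both extremes. This is a minimal illustration, not the paper's actual pipeline; the emotion labels, the `unsafe_rate` helper, and the synthetic demo data below are all assumptions for the sketch.

```python
from collections import defaultdict

# Illustrative condition labels; the paper's exact emotion set may differ.
INTENSITIES = ["low", "medium", "high"]

def unsafe_rate(results):
    """results: iterable of (emotion, intensity, is_unsafe) triples,
    where is_unsafe is the verdict of some safety judge on one model response.
    Returns {(emotion, intensity): fraction of unsafe responses}."""
    counts = defaultdict(lambda: [0, 0])  # (emotion, intensity) -> [unsafe, total]
    for emotion, intensity, unsafe in results:
        counts[(emotion, intensity)][0] += int(unsafe)
        counts[(emotion, intensity)][1] += 1
    return {key: u / t for key, (u, t) in counts.items()}

def medium_peaks(rates, emotion):
    """True when the medium-intensity unsafe rate exceeds both low and high,
    i.e. the non-monotonic pattern the paper reports."""
    low, mid, high = (rates[(emotion, i)] for i in INTENSITIES)
    return mid > low and mid > high

# Synthetic demo data (not real measurements): medium peaks for "anger".
demo = (
    [("anger", "low", u) for u in [True] + [False] * 9]           # 1/10 unsafe
    + [("anger", "medium", u) for u in [True] * 4 + [False] * 6]  # 4/10 unsafe
    + [("anger", "high", u) for u in [True] * 2 + [False] * 8]    # 2/10 unsafe
)
rates = unsafe_rate(demo)
print(rates[("anger", "medium")])     # 0.4
print(medium_peaks(rates, "anger"))   # True
```

In a real run, the triples would come from querying each LALM with the synthesized emotional speech instructions and passing each response to a safety classifier; the aggregation step stays the same.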