benchmark · arXiv · Oct 19, 2025
Bo-Han Feng, Chien-Feng Liu, Yu-Hsuan Li Liang et al. · National Taiwan University · NVIDIA
Reveals that speaker emotional intensity systematically jailbreaks audio-language models, with medium intensity posing the greatest safety risk
Prompt Injection · audio · multimodal · nlp
Large audio-language models (LALMs) extend text-based LLMs with auditory understanding, opening new opportunities for multimodal applications. While their perception, reasoning, and task performance have been widely studied, their safety alignment under paralinguistic variation remains underexplored. This work systematically investigates the role of speaker emotion in that alignment. We construct a dataset of malicious speech instructions expressed across multiple emotions and intensities and evaluate several state-of-the-art LALMs on it. Our results reveal substantial safety inconsistencies: different emotions elicit different rates of unsafe responses, and the effect of intensity is non-monotonic, with medium-intensity expressions often posing the greatest risk. These findings highlight an overlooked vulnerability in LALMs and call for alignment strategies explicitly designed for robustness under emotional variation, a prerequisite for trustworthy deployment in real-world settings.
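To make the evaluation protocol concrete, here is a minimal sketch of how unsafe-response rates could be tallied per (emotion, intensity) cell. This is not the authors' released code: the emotion/intensity labels, `SpokenPrompt` format, `query_lalm`, and `is_unsafe` are all hypothetical placeholders standing in for the paper's dataset, model interface, and safety judge.

```python
# Hypothetical sketch of the emotion x intensity safety evaluation.
# query_lalm() and is_unsafe() are placeholders to be replaced with a real
# audio-language model call and a real safety classifier or human judge.

from collections import defaultdict
from dataclasses import dataclass

EMOTIONS = ["neutral", "angry", "sad", "happy", "fearful"]  # assumed label set
INTENSITIES = ["low", "medium", "high"]                     # assumed levels

@dataclass
class SpokenPrompt:
    audio_path: str   # a malicious instruction rendered as speech
    emotion: str
    intensity: str

def query_lalm(audio_path: str) -> str:
    """Placeholder: send the audio clip to the LALM under test and
    return its text response."""
    raise NotImplementedError

def is_unsafe(response: str) -> bool:
    """Placeholder: safety judge that flags responses which comply
    with the malicious instruction."""
    raise NotImplementedError

def unsafe_rates(dataset: list[SpokenPrompt]) -> dict[tuple[str, str], float]:
    """Fraction of unsafe responses per (emotion, intensity) cell --
    the grid where a non-monotonic intensity effect would show up."""
    totals: dict[tuple[str, str], int] = defaultdict(int)
    unsafe: dict[tuple[str, str], int] = defaultdict(int)
    for item in dataset:
        key = (item.emotion, item.intensity)
        totals[key] += 1
        if is_unsafe(query_lalm(item.audio_path)):
            unsafe[key] += 1
    return {k: unsafe[k] / totals[k] for k in totals}
```

Comparing rates along the intensity axis within each emotion is what would surface the paper's headline finding, e.g. a medium-intensity cell exceeding both its low- and high-intensity neighbors.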