ALMGuard: Safety Shortcuts and Where to Find Them as Guardrails for Audio-Language Models
Weifei Jin 1, Yuxin Cao 2, Junjie Su 1, Minhui Xue 3,4, Jie Hao 1, Ke Xu 5, Jin Song Dong 2, Derui Wang 3
Published on arXiv (arXiv:2510.26096)
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
ALMGuard reduces the average success rate of advanced ALM-specific jailbreak attacks to 4.6% across four models while maintaining comparable utility on benign benchmarks, achieving state-of-the-art defense.
ALMGuard (SAP + M-GSM)
Novel technique introduced
Recent advances in Audio-Language Models (ALMs) have significantly improved multimodal understanding capabilities. However, the introduction of the audio modality also brings new and unique vulnerability vectors. Previous studies have proposed jailbreak attacks that specifically target ALMs, revealing that defenses directly transferred from traditional audio adversarial attacks or text-based Large Language Model (LLM) jailbreaks are largely ineffective against these ALM-specific threats. To address this issue, we propose ALMGuard, the first defense framework tailored to ALMs. Based on the assumption that safety-aligned shortcuts naturally exist in ALMs, we design a method to identify universal Shortcut Activation Perturbations (SAPs) that serve as triggers that activate the safety shortcuts to safeguard ALMs at inference time. To better sift out effective triggers while preserving the model's utility on benign tasks, we further propose Mel-Gradient Sparse Mask (M-GSM), which restricts perturbations to Mel-frequency bins that are sensitive to jailbreaks but insensitive to speech understanding. Both theoretical analyses and empirical results demonstrate the robustness of our method against both seen and unseen attacks. Overall, ALMGuard reduces the average success rate of advanced ALM-specific jailbreak attacks to 4.6% across four models, while maintaining comparable utility on benign benchmarks, establishing it as the new state of the art. Our code and data are available at https://github.com/WeifeiJin/ALMGuard.
Key Contributions
- Introduces ALMGuard, the first defense framework tailored to Audio-Language Models, based on the hypothesis that safety-aligned shortcuts naturally exist in well-aligned ALMs
- Proposes Shortcut Activation Perturbations (SAPs), universal acoustic perturbations that activate safety shortcuts at inference time without any model retraining
- Introduces Mel-Gradient Sparse Mask (M-GSM) to restrict perturbations to Mel-frequency bins sensitive to jailbreaks but insensitive to speech understanding, preserving benign utility
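The M-GSM idea described above, restricting a universal perturbation to selected Mel-frequency bands before adding it to the input, can be sketched as follows. This is an illustrative NumPy reconstruction, not the paper's implementation: the band-to-FFT-bin mapping uses hard band edges rather than triangular Mel filters, and the mask, sample rate, and band count are assumptions.

```python
import numpy as np

def hz_to_mel(f):
    """Standard HTK-style Hz-to-Mel conversion."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_band_edges(n_bins, sr, n_mels):
    """Map n_mels equal-width Mel bands onto FFT-bin boundaries.

    Simplified: hard band edges instead of triangular Mel filters.
    """
    mel_pts = np.linspace(0.0, hz_to_mel(sr / 2.0), n_mels + 1)
    hz_pts = 700.0 * (10.0 ** (mel_pts / 2595.0) - 1.0)
    return np.round(hz_pts / (sr / 2.0) * n_bins).astype(int)

def apply_masked_sap(wave, sap, mel_mask, sr=16000):
    """Add a precomputed universal perturbation (SAP) to `wave`,
    keeping only its energy in Mel bands where mel_mask == 1.

    Bands marked 0 (those the mask deems important for speech
    understanding) contribute nothing, preserving benign utility.
    """
    spec = np.fft.rfft(sap)                       # perturbation spectrum
    edges = mel_band_edges(len(spec), sr, len(mel_mask))
    keep = np.zeros_like(spec)
    for m, allowed in enumerate(mel_mask):
        if allowed:                               # pass this Mel band through
            keep[edges[m]:edges[m + 1]] = spec[edges[m]:edges[m + 1]]
    masked_sap = np.fft.irfft(keep, n=len(sap))   # back to time domain
    return wave + masked_sap
```

With an all-ones mask the full SAP is added; with an all-zeros mask the input passes through unchanged, which makes the utility/defense trade-off controlled entirely by which Mel bands the mask selects.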
🛡️ Threat Analysis
The attacks being defended against use adversarial audio perturbations (e.g., AdvWave): crafted audio inputs that manipulate ALM behavior at inference time. SAPs themselves are gradient-optimized universal acoustic perturbations, placing this work squarely in the adversarial input-manipulation space for the audio modality.
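The "gradient-optimized universal acoustic perturbation" framing can be illustrated with a minimal PGD-style loop. Everything here is a toy stand-in: a fixed linear scorer plays the role of the model's refusal behavior (a real SAP search would backpropagate through the full ALM), and the step size, budget, and objective are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 256                          # toy "audio" dimensionality
w = rng.normal(size=D)           # hypothetical refusal direction of the model

def refusal_score(x):
    """Toy proxy for how strongly the model's safety behavior activates."""
    return float(x @ w)

def optimize_sap(batch, steps=50, lr=1e-2, eps=0.05):
    """PGD-style search for ONE delta that raises the refusal score of
    EVERY input in `batch` (universality), clipped to an L-inf ball of
    radius eps so the perturbation stays small."""
    delta = np.zeros(D)
    for _ in range(steps):
        # Average gradient of the refusal score over the batch.
        # For this linear toy the gradient w.r.t. delta is just w;
        # a real model would supply it via backpropagation.
        grad = np.mean([w for _ in batch], axis=0)
        delta += lr * np.sign(grad)          # signed gradient ascent step
        delta = np.clip(delta, -eps, eps)    # project back into the budget
    return delta
```

Because the same delta must work across the whole batch, the loop averages per-example gradients before stepping, which is what makes the resulting perturbation universal rather than input-specific.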