SARSteer: Safeguarding Large Audio Language Models via Safe-Ablated Refusal Steering

Weilin Lin 1, Jianze Li 2, Hui Xiong 1, Li Liu 1

1 citation · 44 references · arXiv

Published on arXiv (2510.17633)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

SARSteer significantly improves harmful-query refusal rates in LALMs while preserving benign response quality, outperforming vanilla LLM safety alignment adaptations.

SARSteer (Safe-Ablated Refusal Steering)

Novel technique introduced


Large Audio-Language Models (LALMs) are becoming essential as a powerful multimodal backbone for real-world applications. However, recent studies show that audio inputs can more easily elicit harmful responses than text, exposing new risks toward deployment. While safety alignment has made initial advances in LLMs and Large Vision-Language Models (LVLMs), we find that vanilla adaptation of these approaches to LALMs faces two key limitations: 1) LLM-based steering fails under audio input due to the large distributional gap between activations, and 2) prompt-based defenses induce over-refusals on benign-speech queries. To address these challenges, we propose Safe-Ablated Refusal Steering (SARSteer), the first inference-time defense framework for LALMs. Specifically, SARSteer leverages text-derived refusal steering to enforce rejection without manipulating audio inputs and introduces decomposed safe-space ablation to mitigate over-refusal. Extensive experiments demonstrate that SARSteer significantly improves harmful-query refusal while preserving benign responses, establishing a principled step toward safety alignment in LALMs.
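The two components described above, a refusal direction derived from text activations and a safe-space ablation that removes the part of that direction aligned with benign behavior, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the difference-in-means direction, the orthonormal `safe_basis`, and the steering strength `alpha` are all assumed details.

```python
import numpy as np

def refusal_direction(harmful_acts, benign_acts):
    """Difference-in-means refusal direction from text activations.

    harmful_acts, benign_acts: (n_samples, d) hidden-state matrices
    collected from harmful vs. benign *text* queries (an assumed recipe
    for 'text-derived refusal steering').
    """
    d = harmful_acts.mean(axis=0) - benign_acts.mean(axis=0)
    return d / (np.linalg.norm(d) + 1e-8)

def safe_ablated_steer(h, d_refuse, safe_basis, alpha=4.0):
    """Steer a hidden state toward refusal, ablating the safe subspace.

    h:          (d,) hidden state at inference time
    d_refuse:   (d,) refusal direction from refusal_direction()
    safe_basis: (d, k) orthonormal basis spanning a 'safe' subspace
                estimated from benign-speech activations (assumed).
    alpha:      steering strength (hypothetical value).
    """
    # Remove the component of the refusal direction that lies in the
    # safe subspace, so benign queries are not pushed toward refusal.
    proj = safe_basis @ (safe_basis.T @ d_refuse)
    d_ablated = d_refuse - proj
    d_ablated = d_ablated / (np.linalg.norm(d_ablated) + 1e-8)
    # Add the ablated refusal direction to the hidden state.
    return h + alpha * d_ablated
```

By construction, the steered update is orthogonal to the safe subspace, which is the mechanism the abstract credits for mitigating over-refusal on benign speech while still enforcing rejection of harmful queries.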


Key Contributions

  • Identifies two failure modes of existing safety alignment approaches when applied to LALMs: distributional gap between audio and text activations causing LLM-based steering to fail, and prompt-based defenses inducing over-refusals
  • Proposes SARSteer, the first inference-time defense framework for LALMs, using text-derived refusal steering to enforce rejection without manipulating audio inputs
  • Introduces decomposed safe-space ablation to mitigate over-refusal on benign speech queries while preserving harmful-query rejection

🛡️ Threat Analysis


Details

Domains
audio, multimodal, nlp
Model Types
llm, multimodal
Threat Tags
inference_time
Applications
large audio-language models, voice assistants, multimodal ai safety