
Sirens' Whisper: Inaudible Near-Ultrasonic Jailbreaks of Speech-Driven LLMs

Zijian Ling 1,2, Pingyi Hu 1, Xiuyong Gao 1, Xiaojing Ma 1, Man Zhou 1,2, Jun Feng 1, Songfeng Lu 1, Dongmei Zhang 3, Bin Benjamin Zhu 3


Published on arXiv

2603.13847

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves up to 0.94 non-refusal rate and 0.925 specific-convincing score on commercial speech-driven LLMs with jailbreak audio perceptually indistinguishable from background noise to human listeners

Sirens' Whisper (SWhisper)

Novel technique introduced


Large language models (LLMs) are increasingly accessed through speech interfaces, introducing new security risks via open acoustic channels. We present Sirens' Whisper (SWhisper), the first practical framework for covert prompt-based attacks against speech-driven LLMs under realistic black-box conditions using commodity hardware. SWhisper enables robust, inaudible delivery of arbitrary target baseband audio, including long and structured prompts, on commodity devices by encoding it into near-ultrasound waveforms that demodulate faithfully after acoustic transmission and microphone nonlinearity. This is achieved through a simple yet effective approach to modeling nonlinear channel characteristics across devices and environments, combined with lightweight channel-inversion pre-compensation. Building on this high-fidelity covert channel, we design a voice-aware jailbreak generation method that ensures intelligibility, brevity, and transferability under speech-driven interfaces. Experiments across both commercial and open-source speech-driven LLMs demonstrate strong black-box effectiveness. On commercial models, SWhisper achieves up to 0.94 non-refusal (NR) and 0.925 specific-convincing (SC) scores. A controlled user study further shows that the injected jailbreak audio is perceptually indistinguishable from background-only playback for human listeners. Although jailbreaks serve as a case study, the underlying covert acoustic channel enables a broader class of high-fidelity prompt-injection and command-execution attacks.
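
The physical mechanism the abstract describes is classic intermodulation: the prompt is modulated onto a near-ultrasonic carrier that humans cannot hear, and the quadratic term of the microphone's nonlinearity mixes it back down to the audible band, where the speech front end transcribes it. The paper's actual signal design is more sophisticated; the following is a minimal NumPy sketch under simplifying assumptions (plain amplitude modulation, a toy second-order nonlinearity with illustrative coefficients `a1` and `a2`, and a brick-wall low-pass standing in for the microphone's internal filtering):

```python
import numpy as np

FS = 96_000           # simulation sample rate, high enough for a ~21 kHz carrier
F_CARRIER = 21_000    # near-ultrasonic carrier, above most adults' hearing range

def am_encode(baseband: np.ndarray, mod_index: float = 0.8) -> np.ndarray:
    """Amplitude-modulate normalized baseband audio onto the carrier."""
    t = np.arange(len(baseband)) / FS
    m = baseband / (np.max(np.abs(baseband)) + 1e-12)
    return (1.0 + mod_index * m) * np.cos(2 * np.pi * F_CARRIER * t)

def mic_nonlinearity(x: np.ndarray, a1: float = 1.0, a2: float = 0.5) -> np.ndarray:
    """Toy second-order microphone model: the x**2 term mixes the AM
    sidebands down to baseband (intermodulation demodulation)."""
    return a1 * x + a2 * x ** 2

def lowpass(x: np.ndarray, cutoff_hz: float = 8_000) -> np.ndarray:
    """Brick-wall FFT low-pass standing in for the mic's internal filtering."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / FS)
    X[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(X, n=len(x))

# Demo: a 1 kHz tone stands in for speech content.
t = np.arange(FS) / FS
speech = np.sin(2 * np.pi * 1_000 * t)

transmitted = am_encode(speech)                    # energy only near 21 kHz
received = lowpass(mic_nonlinearity(transmitted))  # baseband reappears after x**2

# The recovered signal tracks the original baseband closely (correlation near 1).
print(f"correlation with baseband: {np.corrcoef(speech, received)[0, 1]:.3f}")
```

Note that the transmitted waveform has energy only around the carrier, so it is inaudible, while the received waveform (after the nonlinearity and low-pass) carries the baseband content again; real microphones and rooms distort this path, which is what the paper's channel modeling and pre-compensation address.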


Key Contributions

  • First practical inaudible near-ultrasonic acoustic channel for delivering arbitrary prompts to speech-driven LLMs using commodity hardware
  • Nonlinear channel modeling and inversion pre-compensation technique enabling high-fidelity covert audio transmission through microphone nonlinearity (a toy pre-compensation sketch follows this list)
  • Voice-aware jailbreak generation method optimized for intelligibility, brevity, and transferability across speech interfaces
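
The second contribution above can be illustrated with a minimal sketch, assuming the end-to-end speaker-air-microphone-demodulation path is approximated, to first order, by a linear frequency response H(f) estimated from a known probe recording; the payload is then pre-equalized with a regularized inverse of H before modulation. The function names and the Tikhonov-style regularization are illustrative, not taken from the paper, which models nonlinear channel characteristics rather than a purely linear response:

```python
import numpy as np

def estimate_channel(probe_tx: np.ndarray, probe_rx: np.ndarray) -> np.ndarray:
    """Estimate an end-to-end frequency response H(f): play a known probe
    through the covert channel, record what arrives after demodulation,
    and divide the spectra (small constant avoids division by ~0)."""
    n = len(probe_tx)
    return np.fft.rfft(probe_rx, n) / (np.fft.rfft(probe_tx, n) + 1e-8)

def precompensate(baseband: np.ndarray, H: np.ndarray, eps: float = 1e-2) -> np.ndarray:
    """Pre-equalize the payload with a regularized inverse of H so the
    waveform that survives the channel approximates the intended audio.
    Assumes baseband has the same length as the probe used to estimate H."""
    X = np.fft.rfft(baseband)
    # Tikhonov-style inverse: conj(H) / (|H|^2 + eps) stays bounded at
    # spectral notches where a naive 1/H would amplify noise.
    H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)
    y = np.fft.irfft(X * H_inv, n=len(baseband))
    return y / (np.max(np.abs(y)) + 1e-12)  # renormalize before AM encoding
```

The regularization constant trades equalization accuracy against noise amplification: at frequencies where the channel response is nearly zero, an unregularized inverse would inject large, possibly audible artifacts into the pre-compensated waveform.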

🛡️ Threat Analysis

Input Manipulation Attack

Uses adversarial acoustic input manipulation (near-ultrasonic encoding with nonlinear demodulation) to inject malicious prompts into speech-driven LLM systems at inference time. The attack crafts physical acoustic inputs that the speech interface transcribes as attacker-chosen prompts, making this an input manipulation attack with a physical attack vector.
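
A quick way to sanity-check the inaudibility property claimed above is a spectral energy test on the transmitted waveform: a well-formed near-ultrasonic signal should carry essentially no energy in the human-audible band. A minimal sketch, where the 16 kHz cutoff is an illustrative threshold rather than a value from the paper:

```python
import numpy as np

def audible_energy_fraction(x: np.ndarray, fs: int, cutoff_hz: float = 16_000) -> float:
    """Fraction of signal energy below cutoff_hz; this should be near zero
    for a waveform whose content sits entirely in the near-ultrasonic band."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return float(power[freqs < cutoff_hz].sum() / (power.sum() + 1e-12))
```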


Details

Domains
nlp, audio, multimodal
Model Types
llm, multimodal
Threat Tags
black_box, inference_time, physical
Datasets
AdvBench
Applications
speech-driven llms, voice assistants, speech interfaces