
On Optimizing Multimodal Jailbreaks for Spoken Language Models

Aravind Krishnan 1,2, Karolina Stańczak 3, Dietrich Klakow 1


Published on arXiv (2603.19127)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves a 1.5x to 10x higher jailbreak success rate than unimodal attacks when jointly optimizing text and audio inputs on state-of-the-art SLMs

Novel technique introduced: JAMA


As Spoken Language Models (SLMs) integrate speech and text modalities, they inherit the safety vulnerabilities of their LLM backbone along with an expanded attack surface. SLMs have previously been shown to be susceptible to jailbreaking, where adversarial prompts induce harmful responses. Yet existing attacks largely remain unimodal, optimizing either text or audio in isolation. We explore gradient-based multimodal jailbreaks by introducing JAMA (Joint Audio-text Multimodal Attack), a joint multimodal optimization framework combining Greedy Coordinate Gradient (GCG) for text and Projected Gradient Descent (PGD) for audio to simultaneously perturb both modalities. Evaluations across four state-of-the-art SLMs and four audio types demonstrate that JAMA surpasses unimodal jailbreak success rates by 1.5x to 10x. We analyze the operational dynamics of this joint attack and show that a sequential approximation method makes it 4x to 6x faster. Our findings suggest that unimodal safety is insufficient for robust SLMs. The code and data are available at https://repos.lsv.uni-saarland.de/akrishnan/multimodal-jailbreak-slm.
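The paper does not reproduce the attack loop here; a minimal sketch, assuming a white-box loss over both modalities, of how such an alternating GCG (text) + PGD (audio) optimization might be structured. The loss, vocabulary, and helper functions below are toy stand-ins (the real attack would use the SLM's cross-entropy on a target completion), not the authors' implementation:

```python
import numpy as np

def toy_loss(token_ids, audio, target=3.0):
    # Toy white-box objective standing in for the model's loss on a
    # target completion (assumption: JAMA uses the SLM's own loss).
    return (token_ids.sum() - target) ** 2 + np.sum(audio ** 2)

def gcg_step(token_ids, audio, vocab):
    # Simplified greedy coordinate step: try every token at every
    # position and keep the single swap that lowers the loss most.
    # (Real GCG ranks candidate swaps by token-embedding gradients.)
    best_ids, best = token_ids.copy(), toy_loss(token_ids, audio)
    for pos in range(len(token_ids)):
        for tok in vocab:
            cand = token_ids.copy()
            cand[pos] = tok
            loss = toy_loss(cand, audio)
            if loss < best:
                best_ids, best = cand, loss
    return best_ids

def pgd_step(audio, audio_orig, lr=0.1, eps=0.5):
    grad = 2 * audio                      # analytic gradient of sum(audio**2)
    audio = audio - lr * grad             # gradient-descent step
    # Project back into the L-infinity eps-ball around the clean audio.
    return np.clip(audio, audio_orig - eps, audio_orig + eps)

def joint_attack(token_ids, audio, vocab, steps=20):
    audio_orig = audio.copy()
    for _ in range(steps):
        token_ids = gcg_step(token_ids, audio, vocab)  # text modality
        audio = pgd_step(audio, audio_orig)            # audio modality
    return token_ids, audio
```

The alternation is the key design point: each modality's update is computed against the other modality's current adversarial state, which is what distinguishes a joint attack from running two unimodal attacks independently.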


Key Contributions

  • JAMA framework combining GCG (text) and PGD (audio) for joint multimodal jailbreak optimization
  • Demonstrates 1.5x-10x higher jailbreak success rates compared to unimodal attacks across 4 state-of-the-art SLMs
  • Sequential approximation method (SAMA) achieving 4x-6x speedup with comparable attack success rates

🛡️ Threat Analysis

Input Manipulation Attack

Uses gradient-based adversarial perturbations (PGD for audio, GCG for text) to craft inputs that induce misclassification or harmful outputs at inference time; this is a core adversarial-example attack.
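As a concrete illustration of the audio side, a single PGD update with an L-infinity projection might look as follows. The gradient here is passed in as an argument (a real attack would backpropagate through the SLM); the step size and budget are illustrative, not the paper's values:

```python
import numpy as np

def pgd_step(audio, audio_orig, grad, lr=0.01, eps=0.05):
    """One projected-gradient-descent step on an audio waveform.

    audio      -- current adversarial waveform
    audio_orig -- clean waveform (center of the perturbation ball)
    grad       -- gradient of the attack loss w.r.t. the audio
    eps        -- L-infinity budget bounding the perturbation
    """
    audio = audio - lr * np.sign(grad)    # signed-gradient step
    # Project back into the eps-ball around the clean audio, then
    # into the valid sample range [-1, 1].
    audio = np.clip(audio, audio_orig - eps, audio_orig + eps)
    return np.clip(audio, -1.0, 1.0)
```

The projection step is what keeps the attack "digital but quiet": no matter how many steps are taken, the perturbation never exceeds the eps budget per sample.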


Details

Domains: multimodal, nlp, audio
Model Types: llm, multimodal, transformer
Threat Tags: white_box, inference_time, targeted, digital
Applications: spoken language models, multimodal dialogue systems, speech-enabled ai assistants