arXiv · Dec 29, 2025
Roee Ziv, Raz Lapid, Moshe Sipper · Ben Gurion University of the Negev · Deepkeep
Universal adversarial audio perturbations attack encoder latent space to hijack audio-LLM outputs without accessing the language model
Input Manipulation · Attack · Prompt Injection · audio · nlp
Audio-language models combine audio encoders with large language models to enable multimodal reasoning, but they also introduce new security vulnerabilities. We propose a universal targeted latent space attack, an encoder-level adversarial attack that manipulates audio latent representations to induce attacker-specified outputs in downstream language generation. Unlike prior waveform-level or input-specific attacks, our approach learns a universal perturbation that generalizes across inputs and speakers and does not require access to the language model. Experiments on Qwen2-Audio-7B-Instruct demonstrate consistently high attack success rates with minimal perceptual distortion, revealing a critical and previously underexplored attack surface at the encoder level of multimodal systems.
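The attack described in the abstract can be illustrated with a toy sketch: learn a single perturbation, shared across all inputs, that drives the encoder's latent output toward an attacker-chosen target. The sketch below uses a frozen *linear* stand-in encoder with a hand-derived gradient; the paper targets a deep audio encoder, and all names, shapes, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy universal targeted latent-space attack with a frozen linear
# "encoder" z = W @ x. Illustrative only: the real attack optimizes
# against a deep audio encoder with autograd.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 32))        # frozen encoder weights
audios = rng.standard_normal((16, 32))  # batch of clean "waveforms"
z_target = rng.standard_normal(8)       # attacker-chosen target latent

delta = np.zeros(32)                    # one universal perturbation
epsilon = 0.5                           # L_inf budget (imperceptibility)
lr = 0.005

for _ in range(500):
    # gradient of mean_i ||W(x_i + delta) - z_target||^2 w.r.t. delta
    residual = (audios + delta) @ W.T - z_target   # shape (16, 8)
    grad = 2.0 * residual.mean(axis=0) @ W         # shape (32,)
    delta = np.clip(delta - lr * grad, -epsilon, epsilon)

# The same delta pushes *every* input's latent toward z_target,
# mirroring the input- and speaker-generalization claim.
before = np.linalg.norm(audios @ W.T - z_target, axis=1).mean()
after = np.linalg.norm((audios + delta) @ W.T - z_target, axis=1).mean()
```

Because the objective is defined purely on encoder latents, the optimization never touches the downstream language model, which is what makes the encoder-level attack surface notable.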
llm · transformer · multimodal