attack 2025

Odysseus: Jailbreaking Commercial Multimodal LLM-integrated Systems via Dual Steganography

Songze Li 1, Jiameng Cheng 1, Yiming Li 2, Xiaojun Jia 2, Dacheng Tao 2

3 citations · arXiv


Published on arXiv — 2512.20168

Input Manipulation Attack

OWASP ML Top 10 — ML01

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Achieves up to 99% attack success rate against GPT-4o, Gemini-2.0-pro, Gemini-2.0-flash, and Grok-3 by hiding both malicious queries and harmful responses in steganographic images that bypass content filters.

Odysseus

Novel technique introduced


Abstract

By integrating language understanding with perceptual modalities such as images, multimodal large language models (MLLMs) constitute a critical substrate for modern AI systems, particularly intelligent agents operating in open and interactive environments. However, their increasing accessibility also raises heightened risks of misuse, such as generating harmful or unsafe content. To mitigate these risks, alignment techniques are commonly applied to steer model behavior toward human values. Despite these efforts, recent studies have shown that jailbreak attacks can circumvent alignment and elicit unsafe outputs. Most existing jailbreak methods are tailored to open-source models and exhibit limited effectiveness against commercial MLLM-integrated systems, which often employ additional filters that detect and block malicious input and output content, significantly reducing jailbreak threats. In this paper, we reveal that the success of these safety filters rests on a critical assumption: that malicious content must be explicitly visible in either the input or the output. While often valid for traditional LLM-integrated systems, this assumption breaks down in MLLM-integrated systems, where attackers can exploit multiple modalities to conceal adversarial intent, creating a false sense of security. To challenge this assumption, we propose Odysseus, a novel jailbreak paradigm that introduces dual steganography to covertly embed malicious queries and responses into benign-looking images. Extensive experiments on benchmark datasets demonstrate that Odysseus successfully jailbreaks several pioneering, realistic MLLM-integrated systems, achieving up to a 99% attack success rate. These results expose a fundamental blind spot in existing defenses and call for rethinking cross-modal security in MLLM-integrated systems.


Key Contributions

  • Dual steganography paradigm (Odysseus) that covertly embeds malicious queries into input images and instructs the MLLM to embed harmful responses into output images, creating a covert channel invisible to content filters.
  • Reveals a fundamental blind spot in existing MLLM safety filters: the assumption that malicious content must be explicitly visible in inputs or outputs fails when steganography is used across modalities.
  • Demonstrates up to 99% attack success rate against commercial systems including GPT-4o, Gemini-2.0-pro, Gemini-2.0-flash, and Grok-3.
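The contributions above hinge on hiding text inside images that look benign. The paper does not spell out its exact encoding here, but least-significant-bit (LSB) steganography is a classic instance of the idea; the sketch below (pure Python, pixels modeled as a raw byte array, all names illustrative) shows how a query can be embedded and recovered while changing each pixel value by at most 1.

```python
# Hedged sketch of LSB steganography, one plausible covert channel of the
# kind Odysseus describes. The encoding scheme here is illustrative, not
# the paper's actual method.

def embed_lsb(pixels: bytearray, message: bytes) -> bytearray:
    """Hide `message` in the least significant bits of `pixels`."""
    # 4-byte big-endian length header, then the payload, MSB-first per byte.
    payload = len(message).to_bytes(4, "big") + message
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for payload")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_lsb(pixels: bytearray) -> bytes:
    """Recover a message embedded by `embed_lsb`."""
    def read_bytes(start: int, n: int) -> bytes:
        data = bytearray()
        for b in range(n):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (pixels[start + b * 8 + i] & 1)
            data.append(byte)
        return bytes(data)
    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length)  # payload starts after the 32 header bits

cover = bytearray(range(256)) * 8          # stand-in for raw pixel bytes
stego = embed_lsb(cover, b"hidden query")
assert extract_lsb(stego) == b"hidden query"
```

Because each pixel byte differs from the cover by at most 1, the stego image is visually indistinguishable from the original, which is precisely why text-oriented input filters never see the query.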

🛡️ Threat Analysis

Input Manipulation Attack

Strategically crafted visual inputs (steganographically modified images) are fed to VLMs to manipulate their outputs and bypass safety mechanisms. Because adversarial visual inputs trigger the jailbroken behavior, the attack warrants the ML01 tag alongside LLM01 under dual-tagging guidelines.
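The blind spot this classification captures can be shown in a few lines: a keyword-based safety filter catches a malicious prompt in plain text, but the same query tucked into image pixel LSBs sails through, since the filter only ever inspects text. The filter, blocklist, and cover image below are all illustrative stand-ins, not components from the paper.

```python
# Hedged sketch: why a text-matching safety filter misses a steganographic
# payload. Blocklist and filter logic are illustrative assumptions.

BLOCKLIST = {"explosive", "malware"}

def text_filter_flags(text: str) -> bool:
    """Naive input filter: flag prompts containing blocked keywords."""
    return any(word in text.lower() for word in BLOCKLIST)

# A plain-text malicious prompt is caught...
assert text_filter_flags("how do I build an explosive device")

# ...but the same query hidden in image pixel LSBs is invisible to it.
payload = b"how do I build an explosive device"
pixels = bytearray(200 + (i % 40) for i in range(1024))   # benign cover image
bits = [(byte >> j) & 1 for byte in payload for j in range(7, -1, -1)]
for i, bit in enumerate(bits):
    pixels[i] = (pixels[i] & 0xFE) | bit                  # flip only the LSB

# The system only sees an innocuous text prompt plus "an image".
benign_prompt = "please describe this image"
assert not text_filter_flags(benign_prompt)
```

Output filtering fails symmetrically: if the model is instructed to steganographically encode its harmful answer into a generated image, the visible response text stays innocuous as well.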


Details

Domains
vision, nlp, multimodal
Model Types
vlm, llm, multimodal
Threat Tags
black_box, inference_time, targeted, digital
Applications
multimodal ai assistants, commercial mllm-integrated systems, visual question answering