GAMBIT: A Gamified Jailbreak Framework for Multimodal Large Language Models
Xiangdong Hu 1, Ya Jiang 1, Qin Hu 1, Xiaojun Jia 2
Published on arXiv
2601.03416
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
GAMBIT achieves a 92.13% attack success rate on Gemini 2.5 Flash, 91.20% on QvQ-MAX, and 85.87% on GPT-4o, significantly outperforming visual obfuscation baselines, especially on reasoning-capable models.
GAMBIT
Novel technique introduced
Multimodal Large Language Models (MLLMs) are now widely deployed, yet their safety alignment remains fragile under adversarial inputs. Prior work has shown that increasing inference steps can disrupt safety mechanisms and lead MLLMs to generate attacker-desired harmful content. However, most existing attacks focus on increasing the complexity of the modified visual task itself and do not explicitly exploit the model's own reasoning incentives; as a result, they underperform on reasoning models (those with chain-of-thought) relative to non-reasoning ones (those without chain-of-thought). If a model can think like a human, can we influence its cognitive-stage decisions so that it proactively completes a jailbreak? To validate this idea, we propose GAMBIT (Gamified Adversarial Multimodal Breakout via Instructional Traps), a novel multimodal jailbreak framework that decomposes and reassembles harmful visual semantics, then constructs a gamified scene that drives the model to explore, reconstruct the intent, and answer as part of winning the game. The resulting structured reasoning chain increases task complexity in both vision and text, positioning the model as a participant whose goal pursuit reduces safety attention and induces it to answer the reconstructed malicious query. Extensive experiments on popular reasoning and non-reasoning MLLMs demonstrate that GAMBIT achieves high Attack Success Rates (ASR): 92.13% on Gemini 2.5 Flash, 91.20% on QvQ-MAX, and 85.87% on GPT-4o, significantly outperforming baselines.
Key Contributions
- GAMBIT framework that decomposes harmful visual semantics, shuffles images with masked keywords, and wraps the query in a gamified competitive scenario to manipulate the model's cognitive decision process toward answering malicious queries
- Psychology-inspired gamified scene construction strategy that positions the MLLM as a game participant, exploiting goal-directed reasoning to suppress safety attention during chain-of-thought reasoning
- Empirical demonstration that GAMBIT outperforms visual obfuscation baselines on both reasoning (CoT) and non-reasoning MLLMs, achieving 92.13% ASR on Gemini 2.5 Flash and 91.20% on QvQ-MAX
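The decompose-and-shuffle step in the first contribution can be sketched in the abstract: split a stimulus into pieces, permute them with a recorded seed, and keep the inverse permutation so the original arrangement is recoverable. The sketch below is a minimal, content-agnostic illustration of that bookkeeping only (the function name and string "tiles" are hypothetical stand-ins; the paper's actual pipeline operates on image patches and masked keywords, which are not reproduced here):

```python
import random

def decompose_and_shuffle(tiles, seed=0):
    """Shuffle a list of puzzle tiles deterministically, returning the
    shuffled list plus the inverse permutation that restores the original
    order. `tiles` can be any sequence of piece identifiers."""
    order = list(range(len(tiles)))
    random.Random(seed).shuffle(order)          # seeded, hence reproducible
    shuffled = [tiles[i] for i in order]        # shuffled[j] == tiles[order[j]]
    # restore[i] = position of original tile i inside `shuffled`
    restore = [order.index(i) for i in range(len(tiles))]
    return shuffled, restore

tiles = [f"tile_{i}" for i in range(9)]         # stand-ins for a 3x3 grid
shuffled, restore = decompose_and_shuffle(tiles, seed=42)
assert [shuffled[p] for p in restore] == tiles  # original order recoverable
```

Keeping the inverse permutation is the design point: the constructed game can reference shuffled positions while the attacker retains a deterministic mapping back to the original semantics.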