Attack · 2026

TreeTeaming: Autonomous Red-Teaming of Vision-Language Models via Hierarchical Strategy Exploration

Chunxiao Li, Lijun Li, Jing Shao



Published on arXiv: 2603.22882

Prompt Injection (OWASP LLM Top 10: LLM01)

Key Finding

Achieves an 87.60% attack success rate on GPT-4o and state-of-the-art performance on 11 of 12 tested VLMs, with a 23.09% average toxicity reduction for stealth.

TreeTeaming

Novel technique introduced


The rapid advancement of Vision-Language Models (VLMs) has brought their safety vulnerabilities into sharp focus. However, existing red-teaming methods are fundamentally constrained by an inherently linear exploration paradigm, confining them to optimizing within a predefined strategy set and preventing the discovery of novel, diverse exploits. To transcend this limitation, we introduce TreeTeaming, an automated red-teaming framework that reframes strategy exploration from static testing to a dynamic, evolutionary discovery process. At its core lies a strategic Orchestrator, powered by a Large Language Model (LLM), which autonomously decides whether to evolve promising attack paths or explore diverse strategic branches, thereby dynamically constructing and expanding a strategy tree. A multimodal actuator is then tasked with executing these complex strategies. In experiments across 12 prominent VLMs, TreeTeaming achieves state-of-the-art attack success rates on 11 models, outperforming existing methods and reaching up to 87.60% on GPT-4o. The framework also demonstrates strategic diversity exceeding the union of previously published jailbreak strategies. Furthermore, the generated attacks exhibit an average toxicity reduction of 23.09%, showcasing their stealth and subtlety. Our work introduces a new paradigm for automated vulnerability discovery, underscoring the necessity of proactive exploration beyond static heuristics to secure frontier AI models.


Key Contributions

  • TreeTeaming framework that dynamically constructs strategy trees for jailbreak discovery via LLM-powered orchestrator
  • Achieves SOTA attack success rates on 11/12 VLMs (up to 87.60% on GPT-4o)
  • Demonstrates superior strategic diversity and 23.09% toxicity reduction compared to existing methods
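As a rough illustration of the evolve-or-explore loop described above, the sketch below grows a strategy tree one node at a time. The node structure, function names, and the coin-flip exploration policy are assumptions for illustration only; in TreeTeaming this decision is made by the LLM-powered Orchestrator, not a random draw.

```python
import random
from dataclasses import dataclass, field

@dataclass
class StrategyNode:
    """One node in the attack-strategy tree."""
    strategy: str            # textual description of the attack strategy
    score: float = 0.0       # attack-success score from the evaluator
    children: list = field(default_factory=list)

def best_leaf(node: StrategyNode) -> StrategyNode:
    """Return the highest-scoring leaf, i.e. the most promising attack path."""
    if not node.children:
        return node
    return max((best_leaf(c) for c in node.children), key=lambda n: n.score)

def orchestrator_step(root, propose_evolution, propose_branch, evaluate,
                      explore_prob=0.3):
    """One evolve-or-explore step on the strategy tree.

    With probability explore_prob, open a diverse new branch at the root;
    otherwise evolve the best-scoring existing path. (The paper delegates
    this choice to an LLM; the coin flip here is a stand-in.)
    """
    if random.random() < explore_prob:
        parent = root
        strategy = propose_branch(root)        # explore: diverse new strategy
    else:
        parent = best_leaf(root)
        strategy = propose_evolution(parent)   # evolve: refine promising path
    child = StrategyNode(strategy, score=evaluate(strategy))
    parent.children.append(child)
    return child
```

In the paper's setting, `propose_evolution` and `propose_branch` would be LLM calls, and `evaluate` would run the multimodal actuator against the target VLM and score the response.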

🛡️ Threat Analysis


Details

Domains
multimodal, nlp, vision
Model Types
vlm, llm, multimodal, transformer
Threat Tags
black_box, inference_time, targeted
Applications
vision-language models, chatbots, multimodal AI assistants