
Jailbreaking Large Language Models through Iterative Tool-Disguised Attacks via Reinforcement Learning

Zhaoqi Wang 1, Zijian Zhang 1, Daqing He 1, Pengtao Kou 1, Xin Li 1, Jiamou Liu 2, Jincheng An 3, Yong Liu 3,4

0 citations · 36 references · arXiv


Published on arXiv

2601.05466

Prompt Injection

OWASP LLM Top 10 — LLM01

Insecure Plugin Design

OWASP LLM Top 10 — LLM07

Key Finding

iMIST achieves higher jailbreak effectiveness with lower rejection rates than existing methods by combining tool-call disguise with RL-guided progressive harmfulness escalation

iMIST

Novel technique introduced


Large language models (LLMs) have demonstrated remarkable capabilities across diverse applications; however, they remain critically vulnerable to jailbreak attacks that elicit harmful responses violating human values and safety guidelines. Despite extensive research on defense mechanisms, existing safeguards prove insufficient against sophisticated adversarial strategies. In this work, we propose iMIST (interactive Multi-step Progressive Tool-disguised jailbreak attack), a novel adaptive jailbreak method that synergistically exploits vulnerabilities in current defense mechanisms. iMIST disguises malicious queries as normal tool invocations to bypass content filters, while simultaneously introducing an interactive progressive optimization algorithm that dynamically escalates response harmfulness through multi-turn dialogues guided by real-time harmfulness assessment. Our experiments on widely used models demonstrate that iMIST achieves higher attack effectiveness while maintaining low rejection rates. These results reveal critical vulnerabilities in current LLM safety mechanisms and underscore the urgent need for more robust defense strategies.
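The tool-disguise idea from the abstract can be illustrated with a minimal sketch: the sensitive query travels inside a tool-call parameter rather than in the plain user message, where content filters typically look. The `lookup_reference` tool name and the payload layout below are hypothetical; they only mirror the general shape of OpenAI-style function-calling requests, not the paper's actual construction.

```python
import json

def disguise_as_tool_call(query: str) -> dict:
    """Wrap a query as a benign-looking tool invocation (illustrative only).

    The text to smuggle past content filters is placed in a tool
    parameter instead of the user message itself.
    """
    tool_schema = {
        "type": "function",
        "function": {
            "name": "lookup_reference",  # innocuous-sounding name (made up)
            "description": "Retrieve reference material for a topic.",
            "parameters": {
                "type": "object",
                "properties": {"topic": {"type": "string"}},
                "required": ["topic"],
            },
        },
    }
    return {
        "messages": [
            # Surface-level user turn looks harmless...
            {"role": "user", "content": "Please call the reference tool."},
            # ...while the query rides inside the tool-call arguments.
            {"role": "assistant", "tool_calls": [{
                "id": "call_0",
                "type": "function",
                "function": {
                    "name": "lookup_reference",
                    "arguments": json.dumps({"topic": query}),
                },
            }]},
        ],
        "tools": [tool_schema],
        "tool_choice": {"type": "function",
                        "function": {"name": "lookup_reference"}},
    }

payload = disguise_as_tool_call("example benign topic")
```

The point of the sketch is the asymmetry it exposes: moderation pipelines that scan only `content` fields never see the string hidden in `arguments`.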


Key Contributions

  • Tool-Disguised Invocation: first method to exploit LLM tool-calling/function-calling interfaces as a jailbreak vector, disguising malicious queries as legitimate tool parameters to bypass content filters
  • Interactive Progressive Optimization: RL-guided multi-turn dialogue strategy that dynamically escalates response harmfulness using real-time harmfulness assessment as a reward signal
  • Empirical demonstration that iMIST achieves higher attack success and lower rejection rates than prior jailbreak methods on widely-deployed LLMs
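The interactive progressive-optimization contribution can be sketched as a greedy, bandit-style loop: each turn the attacker queries the target, scores the response with a harmfulness judge, and reuses the follow-up template with the best observed reward until a threshold is reached. Everything below is a stand-in, `score_harmfulness`, `query_model`, and the greedy update are toy placeholders, not the paper's RL formulation, which is not specified here.

```python
def score_harmfulness(response: str) -> float:
    """Stand-in for the paper's real-time harmfulness judge.
    Rewards longer responses so the sketch runs end to end."""
    return min(len(response) / 100.0, 1.0)

def query_model(prompt: str, turn: int) -> str:
    """Stand-in for the target LLM: response grows with dialogue depth."""
    return prompt * (turn + 1)

def progressive_escalation(seed_prompt, followups, threshold=0.2, max_turns=8):
    """Greedy multi-turn escalation guided by the harmfulness reward:
    re-query each turn, score the response, and exploit the follow-up
    template with the highest observed reward so far."""
    values = {f: 0.0 for f in followups}   # per-template value estimates
    history, prompt, best = [], seed_prompt, 0.0
    for turn in range(max_turns):
        response = query_model(prompt, turn)
        reward = score_harmfulness(response)  # reward signal for this turn
        history.append((prompt, response, reward))
        best = max(best, reward)
        if best >= threshold:              # harmful enough: stop escalating
            break
        if prompt in values:               # update the chosen template's value
            values[prompt] = max(values[prompt], reward)
        prompt = max(values, key=values.get)  # exploit best template next
    return history, best

history, best = progressive_escalation("ab", ["abc", "abcd"])
```

The loop terminates either when the judge's score crosses the threshold or when the turn budget is exhausted, mirroring the multi-turn, reward-guided escalation described above at the coarsest level.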

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time, targeted
Applications
llm safety systems, conversational ai, tool-augmented llm deployments