Attack · 2026

Not All Tokens Are Created Equal: Query-Efficient Jailbreak Fuzzing for LLMs

Wenyu Chen, Xiangtao Meng, Chuanchao Zang, Li Wang, Xinyu Gao, Jianing Wang, Peng Zhan, Zheng Li, Shanqing Guo

0 citations


Published on arXiv: 2603.23269

Prompt Injection

OWASP LLM Top 10 (LLM01)

Key Finding

Achieves 90% attack success rate with over 70% fewer queries compared to baselines; under 25-query budget, improves ASR by 20-40%

TriageFuzz

Novel technique introduced


Large Language Models (LLMs) are widely deployed, yet remain vulnerable to jailbreak prompts that elicit policy-violating outputs. Although prior studies have uncovered these risks, they typically treat all tokens as equally important during prompt mutation, overlooking the varying contributions of individual tokens to triggering model refusals. Consequently, these attacks introduce substantial redundant search under query-constrained scenarios, reducing attack efficiency and hindering comprehensive vulnerability assessment. In this work, we conduct a token-level analysis of refusal behavior and observe that token contributions are highly skewed rather than uniform. Moreover, we find strong cross-model consistency in refusal tendencies, enabling the use of a surrogate model to estimate token-level contributions to the target model's refusals. Motivated by these findings, we propose TriageFuzz, a token-aware jailbreak fuzzing framework that adapts fuzz testing with a series of customized designs. TriageFuzz leverages a surrogate model to estimate the contribution of individual tokens to refusal behavior, enabling the identification of sensitive regions within the prompt. Furthermore, it incorporates a refusal-guided evolutionary strategy that adaptively weights candidate prompts with a lightweight scorer to steer the evolution toward bypassing safety constraints. Extensive experiments on six open-source LLMs and three commercial APIs demonstrate that TriageFuzz achieves comparable attack success rates (ASR) at significantly reduced query cost. Notably, it attains a 90% ASR with over 70% fewer queries than baselines. Even under an extremely restrictive budget of 25 queries, TriageFuzz outperforms existing methods, improving ASR by 20-40%.


Key Contributions

  • Token-level analysis revealing skewed contribution distribution and cross-model consistency in refusal behavior
  • TriageFuzz framework using surrogate models to estimate token-level contributions and identify sensitive prompt regions
  • Refusal-guided evolutionary strategy that achieves 90% ASR with 70% fewer queries than baselines

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time, targeted
Datasets
AdvBench
Applications
chatbot, llm safety