
NEXUS: Network Exploration for eXploiting Unsafe Sequences in Multi-Turn LLM Jailbreaks

Javad Rafiei Asl 1, Sidhant Narula 1, Mohammad Ghasemigol 1, Eduardo Blanco 2, Daniel Takabi 1

3 citations · 54 references · EMNLP


Published on arXiv (2510.03417)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

NEXUS achieves 94.8% attack success rate on GPT-4o — 10.3% above ActorAttack — and outperforms state-of-the-art methods by 2.1%–19.4% across closed-source and open-source LLMs.

NEXUS (ThoughtNet)

Novel technique introduced


Large Language Models (LLMs) have revolutionized natural language processing but remain vulnerable to jailbreak attacks, especially multi-turn jailbreaks that distribute malicious intent across benign exchanges and bypass alignment mechanisms. Existing approaches often explore the adversarial space poorly, rely on hand-crafted heuristics, or lack systematic query refinement. We present NEXUS (Network Exploration for eXploiting Unsafe Sequences), a modular framework for constructing, refining, and executing optimized multi-turn attacks. NEXUS comprises: (1) ThoughtNet, which hierarchically expands a harmful intent into a structured semantic network of topics, entities, and query chains; (2) a feedback-driven Simulator that iteratively refines and prunes these chains through attacker-victim-judge LLM collaboration using harmfulness and semantic-similarity benchmarks; and (3) a Network Traverser that adaptively navigates the refined query space for real-time attacks. This pipeline uncovers stealthy, high-success adversarial paths across LLMs. On several closed-source and open-source LLMs, NEXUS increases attack success rate by 2.1% to 19.4% over prior methods. Code: https://github.com/inspire-lab/NEXUS


Key Contributions

  • ThoughtNet: a hierarchical semantic network that expands a harmful intent into structured multi-turn query chains across topics, entities, and dialogue paths
  • Feedback-driven Simulator using attacker-victim-judge LLM collaboration with harmfulness and semantic-similarity scoring to iteratively refine and prune attack chains
  • Network Traverser that adaptively navigates the refined adversarial query space for real-time multi-turn jailbreak execution, achieving 94.8% ASR on GPT-4o
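The feedback-driven Simulator described above can be sketched as a prune-and-refine loop. This is an illustrative reconstruction, not the paper's implementation: the function names, the `Chain` structure, and the pruning thresholds are all assumptions, and the attacker, victim, and judge LLM calls are left as injectable callables.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical sketch of the attacker-victim-judge collaboration:
# each candidate multi-turn query chain is played against the victim,
# scored by the judge on harmfulness and semantic similarity to the
# original intent, pruned if it falls below either threshold, and
# otherwise handed back to the attacker for refinement.

@dataclass
class Chain:
    queries: List[str]          # one query per conversation turn
    harmfulness: float = 0.0    # judge's harmfulness score
    similarity: float = 0.0     # judge's semantic-similarity score

def simulate(
    chains: List[Chain],
    attacker_refine: Callable[[Chain], Chain],
    victim_respond: Callable[[List[str]], List[str]],
    judge_score: Callable[[List[str], List[str]], Tuple[float, float]],
    rounds: int = 3,            # illustrative iteration budget
    harm_min: float = 0.5,      # illustrative pruning thresholds
    sim_min: float = 0.5,
) -> List[Chain]:
    for _ in range(rounds):
        survivors = []
        for chain in chains:
            responses = victim_respond(chain.queries)
            chain.harmfulness, chain.similarity = judge_score(
                chain.queries, responses
            )
            # Prune chains the judge rates as insufficiently harmful
            # or as drifting from the original intent.
            if chain.harmfulness >= harm_min and chain.similarity >= sim_min:
                survivors.append(attacker_refine(chain))
        if not survivors:
            break
        chains = survivors
    return chains
```

In this sketch the surviving, refined chains would then feed the Network Traverser for real-time execution; the actual scoring benchmarks and refinement prompts are described in the paper and repository.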

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time, targeted
Datasets
AdvBench, HarmBench
Applications
llm chatbots, conversational ai safety