
A Causal Perspective for Enhancing Jailbreak Attack and Defense

Licheng Pan 1, Yunsheng Lu 1, Jiexi Liu 2, Jialing Tao 3, Haozhe Feng 3, Hui Xue, Zhixuan Chu, Kui Ren 1

0 citations · 54 references · arXiv (Cornell University)


Published on arXiv

arXiv:2602.04893

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Identifying causal prompt features via GNN-based causal graph learning yields a Jailbreaking Enhancer that significantly outperforms non-causal baselines on public benchmarks, and a Guardrail Advisor that more accurately extracts malicious intent from obfuscated prompts.

Causal Analyst

Novel technique introduced


Abstract

Uncovering the mechanisms behind "jailbreaks" in large language models (LLMs) is crucial for enhancing their safety and reliability, yet these mechanisms remain poorly understood. Existing studies predominantly analyze jailbreak prompts by probing latent representations, often overlooking the causal relationships between interpretable prompt features and jailbreak occurrences. In this work, we propose Causal Analyst, a framework that integrates LLMs into data-driven causal discovery to identify the direct causes of jailbreaks and leverage them for both attack and defense. We introduce a comprehensive dataset comprising 35k jailbreak attempts across seven LLMs, systematically constructed from 100 attack templates and 50 harmful queries, annotated with 37 meticulously designed human-readable prompt features. By jointly training LLM-based prompt encoding and GNN-based causal graph learning, we reconstruct causal pathways linking prompt features to jailbreak responses. Our analysis reveals that specific features, such as "Positive Character" and "Number of Task Steps", act as direct causal drivers of jailbreaks. We demonstrate the practical utility of these insights through two applications: (1) a Jailbreaking Enhancer that targets identified causal features to significantly boost attack success rates on public benchmarks, and (2) a Guardrail Advisor that utilizes the learned causal graph to extract true malicious intent from obfuscated queries. Extensive experiments, including baseline comparisons and causal structure validation, confirm the robustness of our causal analysis and its superiority over non-causal approaches. Our results suggest that analyzing jailbreak features from a causal perspective is an effective and interpretable approach for improving LLM reliability. Our code is available at https://github.com/Master-PLC/Causal-Analyst.
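The core idea of estimating the causal (not merely correlational) effect of a prompt feature on jailbreak success can be illustrated with a minimal backdoor-adjustment sketch. Everything below is illustrative: the feature names, the toy records, and the hand-picked confounder are assumptions for exposition, not the paper's 35k dataset or its learned causal graph.

```python
from collections import Counter

# Toy annotated jailbreak attempts (hypothetical, for illustration only).
# 'positive_character' is the candidate causal feature; 'template_family'
# plays the role of a confounder influencing both feature and outcome.
attempts = [
    {"positive_character": 1, "template_family": "roleplay", "jailbroken": 1},
    {"positive_character": 1, "template_family": "roleplay", "jailbroken": 1},
    {"positive_character": 0, "template_family": "roleplay", "jailbroken": 1},
    {"positive_character": 0, "template_family": "roleplay", "jailbroken": 0},
    {"positive_character": 1, "template_family": "direct",   "jailbroken": 1},
    {"positive_character": 1, "template_family": "direct",   "jailbroken": 0},
    {"positive_character": 0, "template_family": "direct",   "jailbroken": 0},
    {"positive_character": 0, "template_family": "direct",   "jailbroken": 0},
]

def adjusted_effect(records, feature, confounder, outcome):
    """Backdoor-adjusted average causal effect of `feature` on `outcome`:
    E[outcome | do(feature=1)] - E[outcome | do(feature=0)],
    computed by stratifying on the confounder and reweighting each
    stratum by its marginal frequency."""
    n = len(records)
    strata = Counter(r[confounder] for r in records)
    effect = {}
    for v in (0, 1):
        total = 0.0
        for z, nz in strata.items():
            sub = [r for r in records if r[confounder] == z and r[feature] == v]
            if sub:  # skip strata where the intervention value is unobserved
                total += (sum(r[outcome] for r in sub) / len(sub)) * nz / n
        effect[v] = total
    return effect[1] - effect[0]

print(adjusted_effect(attempts, "positive_character", "template_family",
                      "jailbroken"))  # → 0.5
```

On this toy data the feature raises the adjusted jailbreak probability from 0.25 to 0.75, an effect of 0.5; the paper's framework learns which features deserve this treatment (and the full graph over 37 features) rather than assuming them.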


Key Contributions

  • Causal Analyst framework combining LLM-based prompt encoding with GNN-based causal graph learning to identify direct causal drivers (e.g., "Positive Character", "Number of Task Steps") of jailbreak success across seven LLMs
  • A 35k-attempt jailbreak dataset spanning 100 attack templates, 50 harmful queries, and 37 human-readable prompt features, enabling data-driven causal discovery
  • Two practical applications derived from causal insights: a Jailbreaking Enhancer that targets causal features to raise attack success rates, and a Guardrail Advisor that extracts true malicious intent from obfuscated queries
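A Guardrail-Advisor-style defense can be sketched as a scorer over the direct causal parents of the jailbreak node: rather than matching surface keywords, it checks whether the causally implicated prompt features are active. The graph, weights, threshold, and feature schema below are hand-written placeholders for illustration, not the paper's learned structure.

```python
# Hypothetical direct causal parents of the jailbreak node and their
# weights (illustrative values, not learned from data).
CAUSAL_PARENTS = {
    "positive_character": 0.6,  # prompt casts the model as a benign persona
    "num_task_steps": 0.4,      # prompt decomposes the request into steps
}

def risk_score(features):
    """Sum the weights of causally implicated features active in an
    annotated prompt. 'num_task_steps' counts as active above an
    (assumed) threshold of 3 steps."""
    score = 0.0
    if features.get("positive_character"):
        score += CAUSAL_PARENTS["positive_character"]
    if features.get("num_task_steps", 0) > 3:
        score += CAUSAL_PARENTS["num_task_steps"]
    return score

def flag(features, threshold=0.5):
    """Flag a prompt for review when its causal risk score crosses
    the (assumed) decision threshold."""
    return risk_score(features) >= threshold

print(flag({"positive_character": 1, "num_task_steps": 5}))  # → True
print(flag({"positive_character": 0, "num_task_steps": 2}))  # → False
```

Because the score depends on interpretable causal features rather than wording, an obfuscated query that still exhibits the causal pattern (e.g., a benign persona plus a long step-by-step task) is flagged even when its surface text looks innocuous.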

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, gnn
Threat Tags
black_box, inference_time, targeted
Datasets
Custom 35k jailbreak dataset (7 LLMs, 100 templates, 50 queries); public jailbreak benchmarks
Applications
llm safety; jailbreak attack enhancement; jailbreak defense / prompt filtering