defense 2025

Mitigating Jailbreaks with Intent-Aware LLMs

Wei Jie Yeo 1, Ranjan Satapathy 2, Erik Cambria 1

0 citations

α

Published on arXiv

2508.12072

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Intent-FT reduces all evaluated jailbreak attack success rates below 50% across open-source and proprietary models, while preserving general capabilities and reducing over-refusals on benign instructions.

Intent-FT

Novel technique introduced


Despite extensive safety-tuning, large language models (LLMs) remain vulnerable to jailbreak attacks via adversarially crafted instructions, reflecting a persistent trade-off between safety and task performance. In this work, we propose Intent-FT, a simple and lightweight fine-tuning approach that explicitly trains LLMs to infer the underlying intent of an instruction before responding. By fine-tuning on a targeted set of adversarial instructions, Intent-FT enables LLMs to generalize intent deduction to unseen attacks, thereby substantially improving their robustness. We comprehensively evaluate both parametric and non-parametric attacks across open-source and proprietary models, considering harmfulness from attacks, utility, over-refusal, and impact against white-box threats. Empirically, Intent-FT consistently mitigates all evaluated attack categories, with no single attack exceeding a 50\% success rate -- whereas existing defenses remain only partially effective. Importantly, our method preserves the model's general capabilities and reduces excessive refusals on benign instructions containing superficially harmful keywords. Furthermore, models trained with Intent-FT accurately identify hidden harmful intent in adversarial attacks, and these learned intentions can be effectively transferred to enhance vanilla model defenses. We publicly release our code at https://github.com/wj210/Intent_Jailbreak.


Key Contributions

  • Intent-FT: a lightweight fine-tuning procedure that trains LLMs to explicitly infer the harmful intent of an instruction before responding, generalizing to unseen jailbreak attacks
  • Comprehensive evaluation across parametric (white-box weight manipulation, fine-tuning API) and non-parametric (prompt-crafted, adversarial suffix) attacks, showing no evaluated attack exceeds 50% success rate
  • Demonstrates that intent representations learned by Intent-FT are transferable to enhance vanilla model defenses without re-training

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
white_boxblack_boxinference_timetraining_time
Datasets
AdvBench
Applications
llm safetychatbot safetyinstruction-following models