Machine Learning for Detection and Analysis of Novel LLM Jailbreaks
John Hawkins 1, Aditya Pramar 1, Rodney Beard 1, Rohitash Chandra 2
Published on arXiv
2510.01644
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Fine-tuning BERT end-to-end outperforms other ML approaches for jailbreak detection; explicit reflexivity in prompt structure is identified as a key discriminating feature of jailbreak prompts.
Fine-tuned BERT jailbreak detector
Novel technique introduced
Large Language Models (LLMs) suffer from a range of vulnerabilities that allow malicious users to solicit undesirable responses through manipulation of the input text. These so-called jailbreak prompts are designed to trick the LLM into circumventing the safety guardrails put in place to keep responses acceptable to the developer's policies. In this study, we analyse the ability of different machine learning models to distinguish jailbreak prompts from genuine uses, including looking at our ability to identify jailbreaks that use previously unseen strategies. Our results indicate that using current datasets the best performance is achieved by fine tuning a Bidirectional Encoder Representations from Transformers (BERT) model end-to-end for identifying jailbreaks. We visualise the keywords that distinguish jailbreak from genuine prompts and conclude that explicit reflexivity in prompt structure could be a signal of jailbreak intention.
Key Contributions
- Comparative evaluation of multiple ML classifiers for distinguishing jailbreak prompts from genuine user inputs, with fine-tuned BERT achieving the best performance
- Analysis of generalization to novel, previously unseen jailbreak strategies using held-out semantic categories
- Keyword visualization identifying explicit reflexivity in prompt structure as a distinguishing signal of jailbreak intent