Detecting Jailbreak Attempts in Clinical Training LLMs Through Automated Linguistic Feature Extraction
Tri Nguyen, Huy Hoang Bao Le, Lohith Srikanth Pentapalli, Laurah Turner, Kelly Cohen
Published on arXiv
2602.13321
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
LLM-derived linguistic features provide an effective and interpretable basis for automated jailbreak detection, achieving strong cross-validation and held-out performance in safety-critical clinical dialogue.
Detecting jailbreak attempts in clinical training large language models (LLMs) requires accurate modeling of the linguistic deviations that signal unsafe or off-task user behavior. Prior work on the 2-Sigma clinical simulation platform showed that manually annotated linguistic features could support jailbreak detection; however, reliance on manual annotation limited both scalability and expressiveness. In this study, we extend that framework by using expert annotations of four core linguistic features (Professionalism, Medical Relevance, Ethical Behavior, and Contextual Distraction) and fine-tuning multiple general-domain and medical-domain BERT-based models to predict these features directly from text. The most reliable regressor for each dimension was selected and used as the feature extractor for a second layer of classifiers. We evaluate a suite of predictive models, including tree-based, linear, probabilistic, and ensemble methods, to estimate jailbreak likelihood from the extracted features. Across cross-validation and held-out evaluations, the system achieves strong overall performance, indicating that LLM-derived linguistic features provide an effective basis for automated jailbreak detection. Error analysis further highlights key limitations in current annotations and feature representations, pointing toward future improvements such as richer annotation schemes, finer-grained feature extraction, and methods that capture the evolving risk of jailbreak behavior over the course of a dialogue. This work demonstrates a scalable and interpretable approach to detecting jailbreak behavior in safety-critical clinical dialogue systems.
Key Contributions
- Two-layer detection architecture where fine-tuned BERT-based regressors extract four clinical-domain linguistic features (Professionalism, Medical Relevance, Ethical Behavior, Contextual Distraction) as an interpretable jailbreak signal
- Automation of previously labor-intensive manual annotation with LLM-based feature extraction, enabling scalable jailbreak monitoring in clinical dialogue systems
- Systematic comparison of second-layer classifiers (tree-based, linear, probabilistic, ensemble) using the four-dimensional feature vector for jailbreak likelihood prediction
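The two-layer pipeline above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the first-layer BERT regressors are stubbed out as callables (the paper fine-tunes them on expert annotations), the feature value ranges and class separation are synthetic assumptions, and logistic regression stands in for the paper's full suite of second-layer classifiers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# The four linguistic feature dimensions named in the paper.
FEATURES = ["professionalism", "medical_relevance",
            "ethical_behavior", "contextual_distraction"]

def extract_features(text, regressors):
    """Layer 1: map a dialogue turn to a 4-dim feature vector.

    `regressors` is a dict of feature name -> callable(text) -> float,
    standing in for the fine-tuned BERT-based regressors.
    """
    return np.array([regressors[f](text) for f in FEATURES])

# Synthetic stand-in for annotated data: benign turns score high on
# professionalism/relevance/ethics and low on distraction; jailbreak
# attempts show the opposite pattern (an illustrative assumption only).
rng = np.random.default_rng(0)
n = 400
benign = rng.normal(loc=[0.8, 0.8, 0.9, 0.2], scale=0.1, size=(n, 4))
attack = rng.normal(loc=[0.3, 0.2, 0.3, 0.8], scale=0.1, size=(n, 4))
X = np.vstack([benign, attack]).clip(0.0, 1.0)
y = np.array([0] * n + [1] * n)  # 1 = jailbreak attempt

# Layer 2: classify jailbreak likelihood from the 4 extracted features.
clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

Because the second layer sees only four named, human-interpretable scores rather than raw embeddings, a flagged turn can be explained by pointing at the offending dimension (e.g. a spike in Contextual Distraction), which is the interpretability argument the paper makes.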