CoCoTen: Detecting Adversarial Inputs to Large Language Models through Latent Space Features of Contextual Co-occurrence Tensors
Sri Durga Sai Sowmya Kadali, Evangelos E. Papalexakis
Published on arXiv (arXiv:2508.02997)
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Achieves an F1 score of 0.83 with only 0.5% of labeled data (a 96.6% improvement over baselines) and a 2.3–128.4x speedup over competing detection methods.
CoCoTen
Novel technique introduced
The widespread use of Large Language Models (LLMs) across many applications marks a significant advance in research and practice. However, their complexity and opacity make them vulnerable to attacks, especially jailbreaks designed to elicit harmful responses. Countering these threats requires strong detection methods to ensure the safe and reliable use of LLMs. This paper studies the detection problem using the Contextual Co-occurrence Matrix, a structure recognized for its efficacy in data-scarce settings. We propose a novel method that leverages the latent space characteristics of Contextual Co-occurrence Matrices and Tensors to effectively identify adversarial and jailbreak prompts. Our evaluations show that this approach achieves a notable F1 score of 0.83 using only 0.5% of labeled prompts, a 96.6% improvement over baselines. This result highlights the strength of the learned latent patterns, especially when labeled data is scarce. Our method is also significantly faster, with speedups ranging from 2.3x to 128.4x over the baseline models.
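To make the core idea concrete, here is a minimal sketch of the matrix variant of the pipeline: build a token co-occurrence matrix over a prompt corpus, factor it to obtain a low-rank latent space, and represent each prompt by the average of its tokens' latent vectors (which a lightweight classifier could then separate with very few labels). This is an illustrative stand-in, not the paper's implementation: CoCoTen uses contextual co-occurrence tensors with tensor decomposition, whereas this sketch simplifies to a single matrix factored by truncated SVD, and the helper names, window size, and rank are assumptions.

```python
import numpy as np

def cooccurrence_matrix(prompts, window=2):
    """Hypothetical helper: symmetric token co-occurrence matrix over a corpus.

    Counts pairs of tokens that appear within `window` positions of each
    other in the same prompt (window size is an assumption).
    """
    vocab = {}
    for p in prompts:
        for tok in p.lower().split():
            vocab.setdefault(tok, len(vocab))
    M = np.zeros((len(vocab), len(vocab)))
    for p in prompts:
        toks = p.lower().split()
        for i, t in enumerate(toks):
            for j in range(i + 1, min(i + window + 1, len(toks))):
                a, b = vocab[t], vocab[toks[j]]
                M[a, b] += 1.0
                M[b, a] += 1.0
    return M, vocab

def prompt_features(prompts, vocab, M, rank=2):
    """Project prompts into a low-rank latent space.

    Uses truncated SVD of the co-occurrence matrix (a stand-in for the
    paper's tensor decomposition) and averages each prompt's token
    latent vectors into a single feature vector.
    """
    U, _, _ = np.linalg.svd(M)
    U = U[:, :rank]
    feats = []
    for p in prompts:
        idx = [vocab[t] for t in p.lower().split() if t in vocab]
        feats.append(U[idx].mean(axis=0) if idx else np.zeros(rank))
    return np.array(feats)
```

Because the latent space is learned from the unlabeled co-occurrence statistics themselves, only the final classification step needs labels, which is consistent with the paper's data-scarce setting (0.5% labeled prompts).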
Key Contributions
- CoCoTen: a jailbreak/adversarial prompt detector using latent space features of Contextual Co-occurrence Matrices and Tensors with tensor decomposition
- Data-scarce effectiveness: achieves F1=0.83 using only 0.5% of labeled prompts, a 96.6% improvement over baselines
- Computational efficiency: a 2.3–128.4x speedup over baseline detection models, with no heavy training required