Towards Reliable and Practical LLM Security Evaluations via Bayesian Modelling
Mary Llewellyn 1, Annie Gray 1, Josh Collyer 2,1, Michael Harries 1
Published on arXiv
2510.05709
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Accounting for output variability yields less definitive vulnerability conclusions; however, some attacks reveal notably higher susceptibility in Transformer and Mamba variants across LLMs with the same training data or mathematical ability.
Novel Technique Introduced
Bayesian hierarchical model with embedding-space clustering
Before adopting a new large language model (LLM) architecture, it is critical to understand its vulnerabilities accurately. Existing evaluations can be difficult to trust, often drawing conclusions from LLMs that are not meaningfully comparable, relying on heuristic inputs, or employing metrics that fail to capture the inherent uncertainty. In this paper, we propose a principled and practical end-to-end framework for evaluating LLM vulnerability to prompt injection attacks. First, we propose practical approaches to experimental design, tackling unfair LLM comparisons by considering two practitioner scenarios: when training an LLM and when deploying a pre-trained LLM. Second, we address the analysis of experiments and propose a Bayesian hierarchical model with embedding-space clustering. The model is designed to improve uncertainty quantification in the common setting where LLM outputs are non-deterministic, test prompts are imperfectly designed, and practitioners have only limited compute to evaluate vulnerabilities. We demonstrate the model's improved inferential capabilities in several prompt injection attack settings. Finally, we apply the pipeline to compare the security of Transformer and Mamba architectures. Accounting for output variability can yield less definitive conclusions; however, for some attacks we find notably increased vulnerabilities in Transformer and Mamba variants across LLMs with the same training data or mathematical ability.
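The core idea of the analysis stage can be illustrated with a minimal sketch. The paper's exact model is not specified here, so the following is a simplified, hypothetical stand-in: prompts are grouped by a tiny k-means over their embeddings, and attack outcomes within each cluster are pooled under a conjugate Beta-Binomial, yielding a posterior mean and standard deviation for each cluster's attack-success rate rather than a single point estimate. The function names and the choice of a Beta(a0, b0) prior are assumptions for illustration.

```python
import math
from collections import defaultdict

def cluster_prompts(embeddings, k=2, iters=20):
    """Tiny k-means over prompt embeddings (pure-Python, illustrative
    stand-in for embedding-space clustering; not the authors' code)."""
    def dist2(e, c):
        return sum((a - b) ** 2 for a, b in zip(e, c))

    centroids = [list(e) for e in embeddings[:k]]
    for _ in range(iters):
        groups = defaultdict(list)
        for e in embeddings:
            groups[min(range(k), key=lambda j: dist2(e, centroids[j]))].append(e)
        centroids = [
            [sum(dim) / len(groups[j]) for dim in zip(*groups[j])]
            if groups[j] else centroids[j]
            for j in range(k)
        ]
    return [min(range(k), key=lambda j: dist2(e, centroids[j])) for e in embeddings]

def hierarchical_attack_rates(successes, trials, labels, a0=1.0, b0=1.0):
    """Pool attack outcomes within each cluster under a shared Beta(a0, b0)
    prior; the conjugate Beta-Binomial posterior for a cluster with s
    successes in n trials is Beta(a0 + s, b0 + n - s). Returns the posterior
    mean and standard deviation of each cluster's attack-success rate.
    (A simplified stand-in for the paper's full hierarchical model.)"""
    by_cluster = defaultdict(lambda: [0, 0])
    for s, n, c in zip(successes, trials, labels):
        by_cluster[c][0] += s
        by_cluster[c][1] += n
    posterior = {}
    for c, (s, n) in sorted(by_cluster.items()):
        a, b = a0 + s, b0 + n - s
        mean = a / (a + b)
        sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
        posterior[c] = (mean, sd)
    return posterior

# Each prompt is sampled repeatedly to capture non-deterministic outputs:
labels = cluster_prompts([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]], k=2)
rates = hierarchical_attack_rates([9, 8, 1, 0], [10, 10, 10, 10], labels)
```

Reporting a posterior standard deviation alongside the mean is what lets downstream comparisons hedge: a cluster observed for only a few trials keeps a wide posterior instead of an overconfident point estimate.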
Key Contributions
- Principled experimental design for fair LLM architecture comparisons that controls for confounding variables (training data, hyperparameters, mathematical ability)
- Bayesian hierarchical model with embedding-space clustering for uncertainty-aware quantification of prompt injection vulnerability under limited compute and imperfect test prompts
- Empirical case study showing that accounting for output variability yields less definitive but more reliable vulnerability findings when comparing Transformer vs. Mamba architectures
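The third contribution, that accounting for output variability yields less definitive findings, can be made concrete with a toy comparison. Assuming Bernoulli attack outcomes and the same Beta-Binomial posterior as above, a difference between two LLMs is treated as "definitive" only when their posterior intervals do not overlap; the interval construction (mean plus or minus two posterior standard deviations) and function names are illustrative assumptions, not the paper's decision rule.

```python
import math

def posterior_interval(s, n, a0=1.0, b0=1.0, z=2.0):
    """Beta(a0 + s, b0 + n - s) posterior for an attack-success rate;
    returns a mean +/- z*sd interval (normal approximation, for
    illustration only)."""
    a, b = a0 + s, b0 + n - s
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return max(0.0, mean - z * sd), min(1.0, mean + z * sd)

def definitively_more_vulnerable(s1, n1, s2, n2):
    """Declare a definitive vulnerability gap only when the two posterior
    intervals are disjoint; overlapping intervals mean the observed gap
    could plausibly be sampling noise."""
    lo1, hi1 = posterior_interval(s1, n1)
    lo2, hi2 = posterior_interval(s2, n2)
    return lo1 > hi2 or lo2 > hi1

# The same observed rates (70% vs 30%) are inconclusive at 10 trials per
# model but definitive at 100, which is why point-estimate comparisons
# under limited compute can overstate conclusions.
few = definitively_more_vulnerable(7, 10, 3, 10)      # not definitive
many = definitively_more_vulnerable(70, 100, 30, 100) # definitive
```

This captures the qualitative message of the case study: once posterior uncertainty is reported, small evaluation budgets often cannot separate architectures that a naive success-rate comparison would rank confidently.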