
Bayesian Evaluation of Large Language Model Behavior

Rachel Longjohn, Shang Wu, Saatvik Kher, Catarina Belém, Padhraic Smyth

1 citation · arXiv


Published on arXiv: 2511.10661

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Bayesian uncertainty quantification reveals that single-generation, deterministic evaluation substantially underestimates the variability in LLM safety metrics such as jailbreak refusal rates.


It is increasingly important to evaluate how text generation systems based on large language models (LLMs) behave, such as their tendency to produce harmful output or their sensitivity to adversarial inputs. Such evaluations often rely on a curated benchmark set of input prompts provided to the LLM, where the output for each prompt may be assessed in a binary fashion (e.g., harmful/non-harmful or does not leak/leaks sensitive information), and the aggregation of binary scores is used to evaluate the LLM. However, existing approaches to evaluation often neglect statistical uncertainty quantification. With an applied statistics audience in mind, we provide background on LLM text generation and evaluation, and then describe a Bayesian approach for quantifying uncertainty in binary evaluation metrics. We focus in particular on uncertainty that is induced by the probabilistic text generation strategies typically deployed in LLM-based systems. We present two case studies applying this approach: 1) evaluating refusal rates on a benchmark of adversarial inputs designed to elicit harmful responses, and 2) evaluating pairwise preferences of one LLM over another on a benchmark of open-ended interactive dialogue examples. We demonstrate how the Bayesian approach can provide useful uncertainty quantification about the behavior of LLM-based systems.
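To make the approach concrete, here is a minimal sketch of Bayesian uncertainty quantification for a benchmark-averaged binary metric such as a refusal rate. This is a simplified illustration, not the paper's exact model: it assumes an independent Beta-Binomial posterior per prompt (the function name `posterior_refusal_rate` and the uniform Beta(1, 1) prior are choices made here for the example), and it summarizes the posterior over the benchmark average by Monte Carlo.

```python
import random

def beta_sample(rng, a, b):
    # Draw from Beta(a, b) via the ratio of two Gamma variates.
    x = rng.gammavariate(a, 1.0)
    y = rng.gammavariate(b, 1.0)
    return x / (x + y)

def posterior_refusal_rate(counts, n_draws=4000, a0=1.0, b0=1.0, seed=0):
    """Monte Carlo posterior for the benchmark-averaged refusal rate.

    counts: list of (refusals, generations) pairs, one per benchmark prompt.
    Each prompt's refusal probability gets an independent Beta(a0, b0) prior,
    so its posterior is Beta(a0 + refusals, b0 + non-refusals). Each draw
    samples all per-prompt probabilities and averages them.
    """
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        thetas = [beta_sample(rng, a0 + r, b0 + (n - r)) for r, n in counts]
        draws.append(sum(thetas) / len(thetas))
    draws.sort()
    mean = sum(draws) / n_draws
    lo, hi = draws[int(0.025 * n_draws)], draws[int(0.975 * n_draws)]
    return mean, (lo, hi)  # posterior mean and 95% credible interval
```

Even when every prompt is refused in all observed generations, the credible interval stays strictly below 1, reflecting that a handful of stochastic generations per prompt cannot rule out occasional non-refusals.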


Key Contributions

  • Bayesian hierarchical model for quantifying uncertainty in binary LLM behavior metrics induced by stochastic text generation
  • Sequential sampling strategies (Thompson sampling) that reduce evaluation cost by prioritizing informative prompts
  • Two case studies applying the framework: jailbreak refusal rate evaluation and pairwise LLM preference comparison
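The sequential-sampling idea in the contributions above can be sketched as follows. This is a hedged illustration, not the paper's algorithm: it uses a Thompson-style randomized rule in which one probability is drawn from each prompt's Beta posterior and the next generation budget goes to the prompt whose draw lands closest to 0.5, where one more binary outcome is most informative. The function name `thompson_pick` and the Beta(1, 1) priors are assumptions of this sketch.

```python
import random

def thompson_pick(counts, rng):
    """Thompson-style selection of the next prompt to re-query.

    counts: list of [refusals, generations] per prompt, with a Beta(1, 1)
    prior on each prompt's refusal probability. One probability is sampled
    from each posterior; the prompt whose sample is nearest 0.5 (i.e., the
    most uncertain outcome) is selected for the next generation.
    """
    best_i, best_gap = 0, 2.0
    for i, (r, n) in enumerate(counts):
        # Sample theta_i ~ Beta(1 + refusals, 1 + non-refusals).
        x = rng.gammavariate(1.0 + r, 1.0)
        y = rng.gammavariate(1.0 + (n - r), 1.0)
        theta = x / (x + y)
        gap = abs(theta - 0.5)
        if gap < best_gap:
            best_i, best_gap = i, gap
    return best_i
```

In use, prompts whose behavior is already pinned down (always refused or never refused) are sampled rarely, while prompts with mixed outcomes absorb most of the remaining generation budget, which is how a sequential strategy reduces evaluation cost.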



Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
JailbreakBench
Applications
llm safety evaluation, jailbreak resistance benchmarking, llm behavioral auditing