
Statistical Estimation of Adversarial Risk in Large Language Models under Best-of-N Sampling

Mingqian Feng 1,2, Xiaodong Liu 2, Weiwei Yang 2, Chenliang Xu 1, Christopher White 2, Jianfeng Gao 2

0 citations · 13 references

Published on arXiv: 2601.22636

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Using only n=100 samples, SABER predicts ASR@1000 with a mean absolute error of 1.66 vs. 12.04 for the naive baseline — an 86.2% reduction in estimation error.

SABER (Scaling-Aware Best-of-N Estimation of Risk)

Novel technique introduced


Large Language Models (LLMs) are typically evaluated for safety under single-shot or low-budget adversarial prompting, which underestimates real-world risk. In practice, attackers can exploit large-scale parallel sampling to repeatedly probe a model until a harmful response is produced. While recent work shows that attack success increases with repeated sampling, principled methods for predicting large-scale adversarial risk remain limited. We propose SABER, a scaling-aware Best-of-N estimation of risk, for modeling jailbreak vulnerability under Best-of-N sampling. We model sample-level success probabilities with a Beta distribution, the conjugate prior of the Bernoulli distribution, and derive an analytic scaling law that enables reliable extrapolation of large-N attack success rates from small-budget measurements. Using only n=100 samples, our anchored estimator predicts ASR@1000 with a mean absolute error of 1.66, compared to 12.04 for the naive baseline — an 86.2% reduction in estimation error. Our results reveal heterogeneous risk-scaling profiles and show that models appearing robust under standard evaluation can experience rapid nonlinear risk amplification under parallel adversarial pressure. This work provides a low-cost, scalable methodology for realistic LLM safety assessment. We will release our code and evaluation scripts upon publication to support future research.
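The Beta-Bernoulli setup in the abstract admits a closed-form scaling law: if a prompt's per-sample success probability is p ~ Beta(a, b), then ASR@N = E[1 − (1 − p)^N] = 1 − B(a, b + N)/B(a, b). The sketch below is an illustrative reconstruction under that assumption, not the paper's released code; the method-of-moments fit and the helper names (`asr_at_n`, `fit_beta_moments`) are ours, and the paper's anchored estimator may differ in how it fits the Beta parameters.

```python
from math import lgamma, exp

def asr_at_n(a: float, b: float, n: int) -> float:
    """Expected ASR@N when per-sample success prob p ~ Beta(a, b).

    Uses ASR@N = 1 - B(a, b + n) / B(a, b), evaluated via
    log-gamma ratios for numerical stability at large n.
    """
    log_ratio = (lgamma(b + n) + lgamma(a + b)
                 - lgamma(b) - lgamma(a + b + n))
    return 1.0 - exp(log_ratio)

def fit_beta_moments(success_rates: list[float]) -> tuple[float, float]:
    """Method-of-moments Beta fit to per-prompt success rates
    measured with a small budget (e.g. n = 100 samples per prompt).
    """
    m = sum(success_rates) / len(success_rates)
    v = sum((r - m) ** 2 for r in success_rates) / len(success_rates)
    common = m * (1.0 - m) / v - 1.0  # requires v < m(1 - m)
    return m * common, (1.0 - m) * common
```

Fitting (a, b) on small-budget measurements and then calling `asr_at_n(a, b, 1000)` gives the large-N extrapolation; note that ASR@1 reduces to the Beta mean a/(a + b), matching the single-shot attack success rate.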


Key Contributions

  • SABER: a Beta-distribution-based analytic scaling law that extrapolates jailbreak ASR@N from small-budget (n=100) measurements with 86.2% lower MAE than baseline
  • Demonstrates heterogeneous risk-scaling profiles across LLMs — models appearing robust under standard single-shot evaluation can exhibit rapid nonlinear ASR amplification under parallel adversarial pressure
  • Low-cost, principled methodology for realistic LLM safety assessment that does not require large-scale attack budgets

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Applications
llm safety evaluation, jailbreak risk assessment