
Comparison requires valid measurement: Rethinking attack success rate comparisons in AI red teaming

Alexandra Chouldechova 1, A. Feder Cooper 2, Solon Barocas 1, Abhinav Palia 2, Dan Vann 1, Hanna Wallach 1

1 citation · 50 references · arXiv


Published on arXiv · 2601.18076

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Many ASR comparisons in jailbreaking and AI red teaming are not meaningfully interpretable due to apples-to-oranges comparisons and measurement validity failures, undermining published conclusions about relative system safety and attack efficacy.


We argue that conclusions drawn via AI red teaming about relative system safety or attack-method efficacy are often not supported by the evidence that attack success rate (ASR) comparisons provide. Through conceptual, theoretical, and empirical contributions, we show that many such conclusions rest on apples-to-oranges comparisons or low-validity measurements. Our arguments are grounded in a simple question: when can attack success rates be meaningfully compared? To answer it, we draw on ideas from social science measurement theory and inferential statistics, which together provide a conceptual grounding for understanding when numerical values obtained by quantifying system attributes can be meaningfully compared. Through this lens, we articulate conditions under which ASRs can and cannot be meaningfully compared. Using jailbreaking as a running example, we provide examples and extensive discussion of apples-to-oranges ASR comparisons and measurement validity challenges.
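The inferential-statistics point can be illustrated with a minimal sketch (the attack counts below are hypothetical, not taken from the paper): before claiming one attack outperforms another, report the uncertainty around each ASR and test whether the observed gap could be sampling noise. This sketch uses a Wilson score interval for each ASR and a standard two-proportion z-test for the difference; it is one common approach, not the paper's prescribed method.

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion such as an ASR."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - half, center + half

def two_proportion_z(s1: int, n1: int, s2: int, n2: int):
    """Two-sided z-test for a difference between two ASRs (pooled variance)."""
    p1, p2 = s1 / n1, s2 / n2
    pooled = (s1 + s2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value via the normal CDF, expressed with math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: attack A succeeds on 42/100 prompts, attack B on 30/100.
lo_a, hi_a = wilson_interval(42, 100)
lo_b, hi_b = wilson_interval(30, 100)
z, p = two_proportion_z(42, 100, 30, 100)
```

Even with a 12-point ASR gap, the overlapping intervals and the z-test here leave the difference statistically ambiguous at 100 prompts each, which is exactly the kind of check that apples-to-apples comparisons require before any efficacy claim.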


Key Contributions

  • Identifies conditions under which attack success rate (ASR) comparisons are and are not meaningful in AI red teaming, grounded in social science measurement theory
  • Demonstrates that many existing jailbreaking ASR comparisons constitute apples-to-oranges comparisons or low-validity measurements through conceptual, theoretical, and empirical analysis
  • Provides a principled framework from inferential statistics and measurement theory for understanding when quantified LLM safety attributes can be legitimately compared

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time
Applications
llm safety evaluation · ai red teaming · jailbreak benchmarking