When Scanners Lie: Evaluator Instability in LLM Red-Teaming

Lidor Erez, Omer Hofman, Tamir Nizri, Roman Vainshtein

Published on arXiv: 2603.14633

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

22 of 25 attack categories in Garak scanner show evaluator instability; proposed framework improves evaluator accuracy from 72% to 89% while quantifying uncertainty in ASR estimates

Reliability-Aware Evaluation Framework

Novel technique introduced


Automated LLM vulnerability scanners are increasingly used to assess security risks by measuring attack success rates (ASR) across attack types. Yet the validity of these measurements hinges on an often-overlooked component: the evaluator that determines whether an attack has succeeded. In this study, we demonstrate that commonly used open-source scanners exhibit measurement instability that depends on the evaluator component. Consequently, changing the evaluator while keeping the attacks and model outputs constant can significantly alter the reported ASR. To tackle this problem, we present a two-phase, reliability-aware evaluation framework. In the first phase, we quantify evaluator disagreement to identify attack categories where ASR reliability cannot be assumed. In the second phase, we propose a verification-based evaluation method in which evaluators are validated by an independent verifier, enabling reliability assessment without relying on extensive human annotation. Applied to the widely used Garak scanner, we observe that 22 of 25 attack categories exhibit evaluator instability, reflected in high disagreement among evaluators. Our approach raises evaluator accuracy from 72% to 89% while enabling selective deployment to control cost and computational overhead. We further quantify evaluator uncertainty in ASR estimates, showing that reported vulnerability scores can vary by up to 33% depending on the evaluator. Our results indicate that the outputs of vulnerability scanners are highly sensitive to the choice of evaluator. Our framework offers a practical approach to quantify unreliable evaluations and enhance the reliability of measurements in automated LLM security assessments.


Key Contributions

  • Demonstrates that commonly used LLM vulnerability scanners exhibit measurement instability depending on the evaluator component, with ASR varying by up to 33%
  • Proposes a two-phase reliability-aware evaluation framework: quantifying evaluator disagreement to identify unreliable attack categories, then using verification-based validation
  • Shows 22 of 25 attack categories in Garak scanner exhibit evaluator instability, and improves evaluator accuracy from 72% to 89%
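The second phase, verification-based evaluation, can be sketched as follows: a primary evaluator scores each output, an independent verifier confirms or rejects that verdict, and ASR is computed only over confirmed verdicts while rejected ones are flagged for review. The `evaluate` and `verify` functions below are stand-in heuristics chosen for illustration; the paper's actual evaluator and verifier components are not specified here.

```python
# Hypothetical sketch of verification-based evaluation: an independent
# verifier validates each evaluator verdict before it counts toward ASR.

def evaluate(output: str) -> bool:
    """Primary evaluator (assumed): naive check for one refusal phrase."""
    return "cannot help" not in output.lower()

def verify(output: str, verdict: bool) -> bool:
    """Independent verifier (assumed): a broader refusal check that
    confirms the verdict only when both agree on success/failure."""
    refused = any(p in output.lower()
                  for p in ("cannot help", "i won't", "refuse"))
    return verdict == (not refused)

def reliable_asr(outputs):
    """ASR over verifier-confirmed verdicts, plus count of flagged cases."""
    confirmed, flagged = [], []
    for out in outputs:
        v = evaluate(out)
        (confirmed if verify(out, v) else flagged).append(v)
    asr = sum(confirmed) / len(confirmed) if confirmed else None
    return asr, len(flagged)

outputs = [
    "Sure, here is the payload you asked for.",   # both agree: success
    "I cannot help with that request.",           # both agree: failure
    "I won't do that, sorry.",                    # evaluator/verifier disagree
]
print(reliable_asr(outputs))  # ASR over 2 confirmed verdicts, 1 flagged
```

Flagged cases are exactly where human annotation effort can be spent selectively, which is how the framework controls cost while improving evaluator accuracy.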

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time
Applications
llm red-teaming, automated vulnerability scanning