
Large Empirical Case Study: Go-Explore adapted for AI Red Team Testing

Manish Bhatt, Adrian Wood, Idan Habler, Ammar Al-Kahfah

0 citations · 19 references · arXiv


Published on arXiv · 2601.00042

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Seed variance produces an 8x spread in security-testing outcomes, dominating all algorithmic parameters; reward shaping caused exploration collapse in 94% of runs, with zero verified attacks across all reward-shaped configurations

Adversarial Go-Explore

Novel technique introduced


Production LLM agents with tool-using capabilities require security testing despite their safety training. We adapt Go-Explore to evaluate GPT-4o-mini across 28 experimental runs spanning six research questions. We find that random-seed variance dominates algorithmic parameters, yielding an 8x spread in outcomes; single-seed comparisons are unreliable, while multi-seed averaging materially reduces variance in our setup. Reward shaping consistently harms performance, causing exploration collapse in 94% of runs or producing 18 false positives with zero verified attacks. In our environment, simple state signatures outperform complex ones. For comprehensive security testing, ensembles provide attack-type diversity, whereas single agents optimize coverage within a given attack type. Overall, these results suggest that seed variance and targeted domain knowledge can outweigh algorithmic sophistication when testing safety-trained models.
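The adaptation described above can be pictured as a standard Go-Explore loop (archive of visited states, return to a promising cell, explore from it) driven by a black-box agent harness. The sketch below is illustrative only: `ToyEnv`, its `reset`/`restore`/`step` interface, the probe strings, and the signature scheme are all hypothetical stand-ins, not the paper's implementation. It does reflect two of the paper's findings by construction: a simple state signature, and count-based cell selection with no shaped reward.

```python
import hashlib
import random

class ToyEnv:
    """Toy stand-in for a black-box LLM agent harness (illustrative only)."""
    probes = ["benign", "inject"]

    def reset(self):
        return ["start"]

    def restore(self, transcript):
        # Go-Explore's "return" step: replay the archived transcript
        return list(transcript)

    def step(self, state, probe):
        new = state + [probe]
        # Pretend an attack verifies when two injections land in a row
        verified = new[-2:] == ["inject", "inject"]
        return new, verified

def signature(transcript):
    """Simple state signature: hash of the last turn only.
    (The paper reports simple signatures outperforming complex ones;
    this particular scheme is an illustrative assumption.)"""
    return hashlib.sha1(transcript[-1].encode()).hexdigest()[:8]

def go_explore(env, n_iterations=100, seed=0):
    """Minimal Go-Explore loop for agent red teaming (sketch)."""
    rng = random.Random(seed)
    start = env.reset()
    archive = {signature(start): (start, 0)}  # signature -> (transcript, visits)
    findings = []
    for _ in range(n_iterations):
        # Select a cell, favoring rarely visited ones (count-based, no reward shaping)
        sig = min(archive, key=lambda s: archive[s][1] + rng.random())
        transcript, visits = archive[sig]
        archive[sig] = (transcript, visits + 1)
        # Return to the archived state, then explore with a random probe
        state = env.restore(transcript)
        new_state, attack_verified = env.step(state, rng.choice(env.probes))
        if attack_verified:
            findings.append(new_state)
        new_sig = signature(new_state)
        if new_sig not in archive:
            archive[new_sig] = (new_state, 0)
    return findings, archive
```

Because outcomes depend on `seed`, any real comparison between configurations would need to run this loop over several seeds, which is exactly the variance issue the paper measures.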


Key Contributions

  • Adaptation of the Go-Explore RL exploration algorithm for automated LLM agent red-team security testing across 28 experimental runs and 6 research questions
  • Empirical finding that random seed variance (8x outcome spread) dominates all algorithmic parameters, making single-seed comparisons unreliable; multi-seed averaging (~3–4 seeds) materially reduces variance
  • Demonstration that reward shaping consistently harms performance (94% exploration collapse or 18 false positives with zero verified attacks), and that ensembles provide attack-type diversity while single agents optimize within-type coverage
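The multi-seed averaging recommendation in the second bullet can be sketched in a few lines. Everything here is an assumption for illustration: `evaluate_config` and `noisy_run` are hypothetical helpers, and the simulated per-seed spread merely mimics the reported 8x range rather than reproducing the paper's data.

```python
import random
import statistics

def noisy_run(seed, base=4.0):
    """Hypothetical single-seed outcome (e.g. verified-attack count).
    The multiplier range simulates the paper's ~8x seed-to-seed spread."""
    rng = random.Random(seed)
    return base * rng.uniform(0.5, 4.0)

def evaluate_config(run_fn, seeds):
    """Score one configuration by averaging across seeds (sketch).
    Returns (mean, population std dev) so configs are compared on the
    mean rather than on a single, possibly lucky, seed."""
    scores = [run_fn(seed) for seed in seeds]
    return statistics.mean(scores), statistics.pstdev(scores)
```

With 3-4 seeds, the mean is a far more stable basis for comparing algorithmic variants than any individual run, which is the paper's argument against single-seed comparisons.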

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box · inference_time
Datasets
GPT-4o-mini (28 experimental runs across 6 research questions)
Applications
llm agents with tool-using capabilities · safety-trained language models