
Quantifying CBRN Risk in Frontier Models

Divyanshu Kumar, Nitin Aravind Birur, Tanay Baswa, Sahil Agarwal, Prashanth Harshangi

2 citations · 24 references · arXiv


Published on arXiv: 2510.21133

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Deep Inception attacks achieve 86.0% success across 10 frontier LLMs versus 33.8% for direct requests, exposing fundamental brittleness in current safety alignment mechanisms.

Deep Inception

Novel technique introduced


Frontier Large Language Models (LLMs) pose unprecedented dual-use risks through the potential proliferation of chemical, biological, radiological, and nuclear (CBRN) weapons knowledge. We present the first comprehensive evaluation of 10 leading commercial LLMs against both a novel 200-prompt CBRN dataset and a 180-prompt subset of the FORTRESS benchmark, using a rigorous three-tier attack methodology. Our findings expose critical safety vulnerabilities: Deep Inception attacks achieve 86.0% success versus 33.8% for direct requests, demonstrating superficial filtering mechanisms; model safety performance varies dramatically from 2% (claude-opus-4) to 96% (mistral-small-latest) attack success rates; and eight models exceed 70% vulnerability when asked to enhance dangerous material properties. We identify fundamental brittleness in current safety alignment, where simple prompt engineering techniques bypass safeguards for dangerous CBRN information. These results challenge industry safety claims and highlight urgent needs for standardized evaluation frameworks, transparent safety metrics, and more robust alignment techniques to mitigate catastrophic misuse risks while preserving beneficial capabilities.


Key Contributions

  • Novel 200-prompt CBRN evaluation dataset covering factual recall, process instruction, novel generation, and synthesis guidance across all four CBRN domains
  • Three-tier attack taxonomy (direct requests, obfuscated requests, Deep Inception) for systematic LLM safety evaluation
  • Comprehensive red-teaming of 10 frontier commercial LLMs revealing attack success rates ranging from 2% to 96%, with eight models exceeding 70% vulnerability on enhancement requests
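The headline numbers above are per-model, per-tier attack success rates (ASR): the fraction of prompts in a tier for which a model's response bypassed its safeguards. A minimal sketch of that aggregation, assuming illustrative model names and pre-judged boolean outcomes (the paper's actual prompts and response-judging pipeline are not reproduced here):

```python
from collections import defaultdict

# Hypothetical evaluation records: (model, attack_tier, bypassed_safety).
# Tiers mirror the paper's taxonomy; the data below is illustrative only.
RESULTS = [
    ("model-a", "direct", False),
    ("model-a", "direct", True),
    ("model-a", "obfuscated", True),
    ("model-a", "deep_inception", True),
    ("model-b", "direct", False),
    ("model-b", "obfuscated", False),
    ("model-b", "deep_inception", True),
    ("model-b", "deep_inception", True),
]

def attack_success_rates(results):
    """Aggregate attack success rate (ASR) per (model, tier) pair."""
    counts = defaultdict(lambda: [0, 0])  # (model, tier) -> [successes, total]
    for model, tier, success in results:
        entry = counts[(model, tier)]
        entry[0] += int(success)
        entry[1] += 1
    return {key: s / n for key, (s, n) in counts.items()}

rates = attack_success_rates(RESULTS)
print(rates[("model-a", "direct")])  # 1 success out of 2 prompts -> 0.5
```

In the paper's setting, the boolean field would come from a human or automated judge labeling each model response as a safety bypass or a refusal.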

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
FORTRESS benchmark, custom 200-prompt CBRN dataset
Applications
llm safety evaluation, cbrn dual-use risk assessment, ai safety alignment