benchmark 2025

Breaking Agent Backbones: Evaluating the Security of Backbone LLMs in AI Agents

Julia Bazinska 1, Max Mathys 1, Francesco Casucci 1,2, Mateo Rojas-Carulla 1, Xander Davies 3,4, Alexandra Souly 3, Niklas Pfister 1

1 citation · 25 references

Published on arXiv: 2510.22620

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Evaluation of 31 LLMs shows that reasoning capability positively correlates with security, while model size shows no significant correlation with security performance.

Novel technique introduced: threat snapshots


AI agents powered by large language models (LLMs) are being deployed at scale, yet we lack a systematic understanding of how the choice of backbone LLM affects agent security. The non-deterministic, sequential nature of AI agents complicates security modeling, while the integration of traditional software with AI components entangles novel LLM vulnerabilities with conventional security risks. Existing frameworks only partially address these challenges: they either capture only specific vulnerabilities or require modeling of complete agents. To address these limitations, we introduce threat snapshots: a framework that isolates specific states in an agent's execution flow where LLM vulnerabilities manifest, enabling the systematic identification and categorization of security risks that propagate from the LLM to the agent level. We apply this framework to construct the b³ benchmark, a security benchmark based on 194,331 unique crowdsourced adversarial attacks. We then evaluate 31 popular LLMs with it, revealing, among other insights, that enhanced reasoning capabilities improve security, while model size does not correlate with security. We release our benchmark, dataset, and evaluation code to facilitate widespread adoption by LLM providers and practitioners, offering guidance for agent developers and incentivizing model developers to prioritize backbone security improvements.


Key Contributions

  • Threat snapshots framework that isolates specific LLM vulnerability states in agent execution flows, separating LLM-specific risks from traditional software vulnerabilities
  • b³ benchmark comprising 194,331 unique crowdsourced adversarial attacks across 10 threat snapshot categories covering the key agentic security risks
  • Empirical evaluation of 31 backbone LLMs revealing that enhanced reasoning capabilities improve security while model size does not correlate with security
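The threat-snapshot idea above can be pictured as freezing an agent at one step of its execution, injecting an adversarial payload into that frozen state, and querying the backbone LLM once to see whether the vulnerability manifests. The sketch below is a minimal illustration of that concept; all class and field names are hypothetical and not taken from the authors' released code.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ThreatSnapshot:
    """A frozen point in an agent's execution flow (illustrative only)."""
    system_prompt: str
    history: List[Dict[str, str]]              # prior turns, held fixed
    injection_point: int                       # index of the message to poison
    success_predicate: Callable[[str], bool]   # did the attack propagate?

    def run(self, backbone: Callable[[List[Dict[str, str]]], str],
            attack: str) -> bool:
        """Inject `attack` into the frozen state, query the backbone LLM
        once, and score whether the vulnerability manifested."""
        msgs = [{"role": "system", "content": self.system_prompt}]
        for i, m in enumerate(self.history):
            content = m["content"]
            if i == self.injection_point:
                content = content + "\n" + attack
            msgs.append({"role": m["role"], "content": content})
        return self.success_predicate(backbone(msgs))
```

Because the surrounding agent state is fixed, each crowdsourced attack string becomes one cheap, reproducible trial against a single backbone call, rather than requiring a full end-to-end agent rollout. For example, a snapshot poisoning a tool result can be scored against a stub backbone to check the harness itself before plugging in a real model.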

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time
Datasets
b³ benchmark (backbone breaker benchmark, 194,331 crowdsourced adversarial attacks)
Applications
ai agents, llm-powered autonomous systems