
Quantifying Return on Security Controls in LLM Systems

Richard Helder Moulton, Austin O'Brien, John D. Hastings



Published on arXiv: 2512.15081

Prompt Injection (OWASP LLM Top 10: LLM01)

Sensitive Information Disclosure (OWASP LLM Top 10: LLM06)

Key Finding

The baseline RAG system with DeepSeek-R1 exhibits ≥0.98 attack success rates for PII and injection attacks, yielding a $313k total expected loss; ABAC collapses these success rates to near zero and achieves an RoC of 9.83, while NeMo Guardrails achieves an RoC of only 0.05.

Return-on-Control (RoC) framework

Novel technique introduced


Although large language models (LLMs) are increasingly used in security-critical workflows, practitioners lack quantitative guidance on which safeguards are worth deploying. This paper introduces a decision-oriented framework and reproducible methodology that together quantify residual risk, convert adversarial probe outcomes into financial risk estimates and return-on-control (RoC) metrics, and enable monetary comparison of layered defenses for LLM-based systems. A retrieval-augmented generation (RAG) service is instantiated using the DeepSeek-R1 model over a corpus containing synthetic personally identifiable information (PII), and subjected to automated attacks with Garak across five vulnerability classes: PII leakage, latent context injection, prompt injection, adversarial attack generation, and divergence. For each (vulnerability, control) pair, attack success probabilities are estimated via Laplace's Rule of Succession and combined with triangular loss distributions, calibrated from public breach-cost data, in 10,000-run Monte Carlo simulations to produce loss exceedance curves and expected losses. Three widely used mitigations (attribute-based access control (ABAC), named entity recognition (NER) redaction using Microsoft Presidio, and NeMo Guardrails) are then compared to a baseline RAG configuration. The baseline system exhibits very high attack success rates (≥ 0.98 for PII, latent injection, and prompt injection), yielding a total simulated expected loss of $313k per attack scenario. ABAC collapses success probabilities for PII and prompt-related attacks to near zero and reduces the total expected loss by ~94%, achieving an RoC of 9.83. NER redaction likewise eliminates PII leakage and attains an RoC of 5.97, while NeMo Guardrails provides only marginal benefit (RoC of 0.05).
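The probability-and-loss pipeline described in the abstract can be sketched as follows. The function names, the triangular-distribution parameters, and the probe counts are illustrative assumptions, not the paper's exact implementation; only the general recipe (Laplace smoothing, triangular losses, 10,000-run Monte Carlo, loss exceedance) comes from the text above.

```python
import random

def laplace_success_rate(successes: int, trials: int) -> float:
    """Laplace's Rule of Succession: (s + 1) / (n + 2).
    Keeps estimates away from 0 and 1 even for extreme probe outcomes."""
    return (successes + 1) / (trials + 2)

def simulate_losses(p_success: float, low: float, mode: float, high: float,
                    runs: int = 10_000, seed: int = 0) -> list[float]:
    """Monte Carlo loss simulation: in each run the attack succeeds with
    probability p_success and, if so, draws a loss from a triangular
    distribution (low/mode/high parameters are assumed, not from the paper)."""
    rng = random.Random(seed)
    losses = []
    for _ in range(runs):
        if rng.random() < p_success:
            # Note random.triangular's argument order: (low, high, mode).
            losses.append(rng.triangular(low, high, mode))
        else:
            losses.append(0.0)
    return losses

def loss_exceedance(losses: list[float], threshold: float) -> float:
    """Point on the loss exceedance curve: fraction of runs above `threshold`."""
    return sum(1 for x in losses if x > threshold) / len(losses)

# Example: a near-certain attack (98 of 100 probes succeed, hypothetical counts)
p = laplace_success_rate(98, 100)
losses = simulate_losses(p, low=50_000, mode=100_000, high=250_000)
expected_loss = sum(losses) / len(losses)
```

A control is then evaluated by re-running the same simulation with the post-control success rate and comparing the two expected losses.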


Key Contributions

  • Decision-oriented framework that converts Garak attack probe outcomes into financial risk estimates and return-on-control (RoC) metrics for LLM systems
  • Monte Carlo simulation methodology (10,000 runs) combining Laplace-smoothed attack success probabilities with loss triangle distributions calibrated from public breach-cost data
  • Empirical comparison of ABAC, NER redaction (Presidio), and NeMo Guardrails on a DeepSeek-R1 RAG system, showing ABAC reduces expected loss by ~94% (RoC=9.83) while NeMo Guardrails provides near-zero benefit (RoC=0.05)
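One common way to read the RoC figures above is as a ROSI-style ratio of net risk reduction to control cost. The paper's exact formula is not reproduced in this summary, so the definition below and the control-cost figure are illustrative assumptions.

```python
def return_on_control(baseline_loss: float,
                      residual_loss: float,
                      control_cost: float) -> float:
    """ROSI-style Return on Control (assumed form, not the paper's definition):
    net expected loss avoided per dollar spent on the control."""
    loss_avoided = baseline_loss - residual_loss
    return (loss_avoided - control_cost) / control_cost

# Illustrative only: a control that cuts a $313k expected loss by ~94%
# and costs a hypothetical ~$27k to operate yields an RoC near 9.9,
# in the same range as the ABAC result reported above.
roc = return_on_control(313_000, 313_000 * 0.06, 27_000)
```

Under this reading, a control is worth deploying when its RoC is positive, i.e., when the expected loss it avoids exceeds its cost.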

Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Datasets
Synthetic PII corpus, public breach-cost data (loss calibration)
Applications
retrieval-augmented generation (RAG), LLM-based security-critical workflows