benchmark 2026

Security Assessment and Mitigation Strategies for Large Language Models: A Comprehensive Defensive Framework

Taiwo Onitiju , Iman Vakilinia

0 citations

α

Published on arXiv

2603.17123

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Reveals 2.5x variation in LLM security robustness (Gemini-2.5-pro 29.8% vulnerable vs LLaMA-2-70B 11.9%) and defensive framework achieves 68-94% detection across attack categories


Large Language Models increasingly power critical infrastructure from healthcare to finance, yet their vulnerability to adversarial manipulation threatens system integrity and user safety. Despite growing deployment, no comprehensive comparative security assessment exists across major LLM architectures, leaving organizations unable to quantify risk or select appropriately secure LLMs for sensitive applications. This research addresses this gap by establishing a standardized vulnerability assessment framework and developing a multi-layered defensive system to protect against identified threats. We systematically evaluate five widely-deployed LLM families GPT-4, GPT-3.5 Turbo, Claude-3 Haiku, LLaMA-2-70B, and Gemini-2.5-pro against 10,000 adversarial prompts spanning six attack categories. Our assessment reveals critical security disparities, with vulnerability rates ranging from 11.9\% to 29.8\%, demonstrating that LLM capability does not correlate with security robustness. To mitigate these risks, we develop a production-ready defensive framework achieving 83\% average detection accuracy with only 5\% false positives. These results demonstrate that systematic security assessment combined with external defensive measures provides a viable path toward safer LLM deployment in production environments.


Key Contributions

  • Standardized vulnerability assessment framework evaluating 5 major LLM families against 10,000 adversarial prompts across 6 attack categories
  • Comparative security analysis revealing vulnerability rates ranging from 11.9% to 29.8% across different LLM architectures
  • Production-ready multi-layered defensive framework achieving 83% average detection accuracy with 5% false positive rate

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
black_boxinference_time
Datasets
10,000 adversarial prompts across 6 attack categories
Applications
healthcarefinancecustomer servicedecision support systems