Published on arXiv

2512.05485

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Extensive evaluation across 14 LLMs reveals systematic jailbreak vulnerabilities and model-specific failure cases, highlighting trade-offs between safety and utility across diverse attack-defense configurations.

TeleAI-Safety

Novel technique introduced


While the deployment of large language models (LLMs) in high-value industries continues to expand, systematic assessment of their safety against jailbreak and prompt-based attacks remains insufficient. Existing safety evaluation benchmarks and frameworks are often limited by an imbalanced integration of their core components (attack, defense, and evaluation methods) and by a disconnect between flexible evaluation frameworks and standardized benchmarking capabilities. These limitations hinder reliable cross-study comparisons and add unnecessary overhead to comprehensive risk assessment. To address these gaps, we present TeleAI-Safety, a modular and reproducible framework coupled with a systematic benchmark for rigorous LLM safety evaluation. Our framework integrates a broad collection of 19 attack methods (including one self-developed method), 29 defense methods, and 19 evaluation methods (including one self-developed method). Using a curated attack corpus of 342 samples spanning 12 distinct risk categories, the TeleAI-Safety benchmark conducts extensive evaluations across 14 target models. The results reveal systematic vulnerabilities and model-specific failure cases, highlighting critical trade-offs between safety and utility and identifying potential defense patterns for future optimization. In practical scenarios, TeleAI-Safety can be flexibly configured with customized attack, defense, and evaluation combinations to meet specific demands. We release our complete code and evaluation results to facilitate reproducible research and establish unified safety baselines.
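The abstract describes a three-stage, freely recomposable pipeline: an attack method perturbs a prompt, a defense method intercepts it, the target model responds, and an evaluation method judges the outcome. The sketch below illustrates what such a composition could look like in Python; all names here (`Attack`, `Defense`, `run_pipeline`, `EvalResult`, the judge signature) are hypothetical illustrations of the modular design, not the released TeleAI-Safety API.

```python
# Minimal sketch of a modular attack -> defense -> evaluation pipeline in the
# spirit of TeleAI-Safety. All class and function names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Protocol


class Attack(Protocol):
    """An attack method rewrites a prompt into an adversarial variant."""
    def transform(self, prompt: str) -> str: ...


class Defense(Protocol):
    """A defense method sanitizes or refuses an incoming prompt."""
    def filter(self, prompt: str) -> str: ...


@dataclass
class EvalResult:
    prompt: str      # original (pre-attack) prompt
    response: str    # target model's output
    jailbroken: bool # evaluator's verdict on the response


def run_pipeline(
    prompts: list[str],
    attack: Attack,
    defense: Defense,
    target_model: Callable[[str], str],  # e.g., a wrapped chat-completion call
    judge: Callable[[str, str], bool],   # (prompt, response) -> unsafe?
) -> list[EvalResult]:
    """Apply one attack-defense-evaluation configuration to a prompt corpus."""
    results = []
    for prompt in prompts:
        adversarial = attack.transform(prompt)   # attack module
        sanitized = defense.filter(adversarial)  # defense module
        response = target_model(sanitized)       # target LLM under test
        results.append(EvalResult(prompt, response, judge(prompt, response)))
    return results
```

Given concrete implementations of each component, an attack-success rate for one attack-defense configuration falls out directly, e.g. `sum(r.jailbroken for r in results) / len(results)`; sweeping such a loop over many configurations and target models is the kind of grid a benchmark like this would report.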


Key Contributions

  • A modular, reproducible evaluation framework unifying 19 attack methods, 29 defense methods, and 19 evaluation methods for LLM jailbreak safety assessment.
  • A curated attack corpus of 342 samples across 12 distinct risk categories, with systematic evaluation results across 14 target LLMs.
  • Empirical analysis revealing systematic LLM vulnerabilities, model-specific failure modes, and critical safety-utility trade-offs with actionable defense patterns.

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time, digital
Datasets
TeleAI-Safety curated corpus (342 samples, 12 risk categories)
Applications
large language models, llm safety evaluation, jailbreak resistance testing