
Risk Assessment and Security Analysis of Large Language Models

Xiaoyan Zhang , Dongyang Lyu , Xiaoqi Li


Published on arXiv (arXiv:2508.17329)

Output Integrity Attack

OWASP ML Top 10 — ML09

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

The hierarchical defense system successfully identifies concealed role-escape attacks and performs real-time risk scoring, with practical application demonstrated in the financial services domain.

Novel Technique Introduced

BERT-CRF hierarchical defense with entropy-weighted risk assessment


As large language models (LLMs) are deployed in high-risk applications, they expose systemic security challenges, including privacy leaks, bias amplification, and malicious abuse, creating an urgent need for a dynamic risk assessment and collaborative defense framework that covers their entire life cycle. This paper focuses on the security problems of LLMs in critical application scenarios, such as the disclosure of user data, the deliberate input of harmful instructions, and model bias. To address these problems, we design a dynamic risk assessment system together with a hierarchical defense system in which different levels of protection cooperate. The risk assessment system evaluates static and dynamic indicators simultaneously, using entropy weighting to combine essential signals such as sensitive-word frequency, whether an API call is typical, the real-time risk entropy value, and the degree of context deviation. Experimental results show that the system identifies concealed attacks, such as role escape, and performs rapid risk evaluation. At the input layer, a hybrid BERT-CRF model (Bidirectional Encoder Representations from Transformers with a Conditional Random Field layer) identifies and filters malicious commands. The model layer combines dynamic adversarial training with differential-privacy noise injection, and the output layer adds a neural watermarking system that can trace the source of generated content. In practice, the method proves especially valuable for customer service applications in the financial industry.
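The abstract does not give the scoring formula, but the entropy weighting it mentions is conventionally the entropy weight method: indicators whose values vary more across observations carry lower entropy and therefore receive higher weight. A minimal sketch, assuming standard min-free EWM over already-normalized indicator values (the indicator names and sample data are illustrative, not from the paper):

```python
import math

def entropy_weights(samples):
    """Entropy weight method: for each indicator column, compute its
    normalized Shannon entropy over the samples; weight is proportional
    to (1 - entropy), so more-discriminative indicators weigh more."""
    n = len(samples)            # number of observations
    m = len(samples[0])         # number of indicators
    col_sums = [sum(row[j] for row in samples) for j in range(m)]
    entropies = []
    for j in range(m):
        e = 0.0
        for row in samples:
            p = row[j] / col_sums[j]   # proportion p_ij of column total
            if p > 0:
                e -= p * math.log(p)
        entropies.append(e / math.log(n))  # normalize entropy into [0, 1]
    divergence = [1.0 - e for e in entropies]
    total = sum(divergence)
    return [d / total for d in divergence]

def risk_score(indicators, weights):
    """Weighted sum of one request's normalized risk indicators,
    e.g. [sensitive_word_freq, api_anomaly, risk_entropy, context_dev]."""
    return sum(w * x for w, x in zip(weights, indicators))
```

With this scheme, an indicator that is identical across all observations contributes zero weight, while a rare, spiky signal (such as an anomalous API call) dominates the score.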


Key Contributions

  • Dynamic risk assessment system using entropy weighting over sensitive-word frequency, API call anomalies, real-time risk entropy, and context deviation to score LLM inputs
  • Hierarchical three-layer defense: BERT-CRF malicious command filter at input, dynamic adversarial training + differential privacy at model layer, and neural watermarking at output layer
  • Demonstrated detection of concealed jailbreak attacks (role escape) with rapid risk evaluation, validated in financial-industry customer service scenarios
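The three-layer flow in the contributions above can be sketched as a pipeline. This is only a structural illustration: the regex stands in for the paper's BERT-CRF filter (which tags token spans with a learned model), and the Gaussian noise stands in for its differential-privacy mechanism; none of the patterns or parameters come from the paper.

```python
import random
import re

# Toy stand-in for the BERT-CRF input filter: crude role-escape patterns.
INJECTION = re.compile(
    r"ignore (all )?previous instructions|you are now [a-z]+", re.I)

def input_layer(prompt: str) -> bool:
    """Return True if the prompt passes the input-layer filter."""
    return INJECTION.search(prompt) is None

def model_layer_dp_noise(gradients, sigma=0.1, seed=0):
    """DP-style Gaussian noise injection on gradients, used at the model
    layer alongside adversarial training (sigma is illustrative)."""
    rng = random.Random(seed)
    return [g + rng.gauss(0.0, sigma) for g in gradients]

def defense_pipeline(prompt, generate):
    """Input filter -> generation; a watermarking stage on the output
    would follow here (elided)."""
    if not input_layer(prompt):
        return None  # blocked at the input layer
    return generate(prompt)
```

A real deployment would replace the regex with span-level sequence labeling and tie the block/allow decision to the entropy-weighted risk score rather than a hard pattern match.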

🛡️ Threat Analysis

Output Integrity Attack

The output layer explicitly implements a 'neural watermarking system that can track the source of the content' — this is content provenance watermarking of LLM outputs, directly addressing output integrity.
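The paper does not specify the watermark construction; a common token-level approach (in the style of green-list watermarking, not the paper's scheme) keys a pseudorandom partition of the vocabulary on the previous token, biases generation toward "green" tokens, and detects provenance via a one-proportion z-test. A minimal sketch, with the key and gamma as assumed parameters:

```python
import hashlib
import math

def is_green(prev_token, token, key="secret", gamma=0.5):
    """Pseudorandomly assign `token` to the green list, seeded by the
    preceding token and a private key; P(green) = gamma."""
    h = hashlib.sha256(f"{key}:{prev_token}:{token}".encode()).digest()
    return (h[0] / 255.0) < gamma

def watermark_z_score(tokens, key="secret", gamma=0.5):
    """Detection: count green bigrams; a large z-score indicates the text
    was generated under the watermarking sampler."""
    n = len(tokens) - 1
    hits = sum(is_green(a, b, key, gamma) for a, b in zip(tokens, tokens[1:]))
    return (hits - gamma * n) / math.sqrt(n * gamma * (1.0 - gamma))
```

Un-watermarked text hits the green list about gamma of the time (z near 0), while text sampled to prefer green tokens produces a z-score far above any reasonable detection threshold.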


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, training_time, black_box
Applications
llm safety, financial customer service chatbots