attack 2025

Uncovering the Vulnerability of Large Language Models in the Financial Domain via Risk Concealment

Gang Cheng , Haibo Jin 1, Wenbin Zhang 2, Haohan Wang 1, Jun Zhuang 3

0 citations

α

Published on arXiv

2509.10546

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

RCA achieves 93.18% average attack success rate across 9 mainstream LLMs, including 98.28% on GPT-4.1 and 97.56% on OpenAI o1, outperforming all baselines while requiring fewer tokens.

Risk-Concealment Attacks (RCA)

Novel technique introduced


Large Language Models (LLMs) are increasingly integrated into financial applications, yet existing red-teaming research primarily targets harmful content, largely neglecting regulatory risks. In this work, we aim to investigate the vulnerability of financial LLMs through red-teaming approaches. We introduce Risk-Concealment Attacks (RCA), a novel multi-turn framework that iteratively conceals regulatory risks to provoke seemingly compliant yet regulatory-violating responses from LLMs. To enable systematic evaluation, we construct FIN-Bench, a domain-specific benchmark for assessing LLM safety in financial contexts. Extensive experiments on FIN-Bench demonstrate that RCA effectively bypasses nine mainstream LLMs, achieving an average attack success rate (ASR) of 93.18%, including 98.28% on GPT-4.1 and 97.56% on OpenAI o1. These findings reveal a critical gap in current alignment techniques and underscore the urgent need for stronger moderation mechanisms in financial domains. We hope this work offers practical insights for advancing robust and domain-aware LLM alignment.


Key Contributions

  • Risk-Concealment Attacks (RCA): a multi-turn red-teaming framework that progressively embeds regulatory-violating financial intent within surface-compliant conversational turns
  • Dynamic follow-up generation mechanism that adapts to prior model responses to iteratively cloak high-risk financial objectives
  • FIN-Bench: a domain-specific benchmark for evaluating LLM jailbreak vulnerability under financial regulatory criteria (unlicensed advice, tax evasion, market manipulation)

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_timetargeted
Datasets
FIN-Bench
Applications
financial advisingregulatory compliance llmscustomer service chatbots