FinVault: Benchmarking Financial Agent Safety in Execution-Grounded Environments
Zhi Yang 1, Runguo Li 1, Qiqi Qiang 2, Jiashun Wang 3, Fangqi Lou 1, Mengping Li 1, Dongpo Cheng 1, Rui Xu 4,5, Heng Lian 6,5, Shuo Zhang 7,5, Xiaolong Liang 8, Xiaoming Huang 8, Zheng Wei 8, Zhaowei Liu 1, Xin Guo 1, Huacan Wang 9,5, Ronghao Chen 10,5, Liwen Zhang 1
1 Shanghai University of Finance and Economics
2 The Chinese University of Hong Kong, Shenzhen
3 Shanghai University of International Business and Economics
7 Beijing University of Posts and Telecommunications
8 Tencent
Published on arXiv
2601.07853
Prompt Injection
OWASP LLM Top 10 — LLM01
Excessive Agency
OWASP LLM Top 10 — LLM08
Key Finding
Attack success rates reach up to 50.0% on state-of-the-art LLMs and remain at least 6.7% even for the most robust models, demonstrating that existing safety mechanisms do not transfer to realistic financial agent settings.
FinVault
Novel technique introduced
Financial agents powered by large language models (LLMs) are increasingly deployed for investment analysis, risk assessment, and automated decision-making, where their ability to plan, invoke tools, and mutate state introduces new security risks in high-stakes, heavily regulated financial environments. Existing safety evaluations, however, largely focus on language-model-level content compliance or abstract agent settings, and fail to capture the execution-grounded risks arising from real operational workflows and state-changing actions. To bridge this gap, we propose FinVault, the first execution-grounded security benchmark for financial agents. It comprises 31 regulatory case-driven sandbox scenarios with state-writable databases and explicit compliance constraints, together with 107 real-world vulnerabilities and 963 test cases that systematically cover prompt injection, jailbreaking, financially adapted attacks, and benign inputs for false-positive evaluation. Experimental results show that existing defense mechanisms remain ineffective in realistic financial agent settings: average attack success rates (ASR) reach up to 50.0% on state-of-the-art models and remain non-negligible (6.7%) even for the most robust systems, highlighting the limited transferability of current safety designs and the need for stronger finance-specific defenses. Our code can be found at https://github.com/aifinlab/FinVault.
Key Contributions
- FinVault: first execution-grounded security benchmark for financial LLM agents with 31 regulatory case-driven sandbox scenarios featuring state-writable databases and compliance constraints
- 107 real-world vulnerabilities and 963 test cases spanning prompt injection, jailbreaking, financially adapted attacks, and benign inputs for false-positive evaluation
- Empirical evaluation revealing that current defenses remain inadequate, with average ASR reaching up to 50.0% on SOTA models and staying at 6.7% even for the most robust systems
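The headline metrics above follow the usual benchmark convention: attack success rate (ASR) is the fraction of adversarial test cases in which the agent actually performs the prohibited state-changing action, while benign control inputs are scored separately to catch false positives (legitimate requests the agent wrongly blocks). A minimal scoring sketch under those assumptions; all names are illustrative and not taken from the FinVault codebase:

```python
# Hedged sketch: scoring ASR and false-positive rate for an
# execution-grounded agent benchmark. Illustrative only; FinVault's
# actual harness and schema may differ.
from dataclasses import dataclass


@dataclass
class TestResult:
    is_attack: bool  # adversarial test case vs. benign control input
    failed: bool     # attack case: agent executed the prohibited action
                     # benign case: agent wrongly refused a legitimate request


def score(results: list[TestResult]) -> dict[str, float]:
    attacks = [r for r in results if r.is_attack]
    benign = [r for r in results if not r.is_attack]
    # ASR: share of adversarial cases where the attack succeeded.
    asr = sum(r.failed for r in attacks) / len(attacks) if attacks else 0.0
    # FPR: share of benign cases the agent over-blocked.
    fpr = sum(r.failed for r in benign) / len(benign) if benign else 0.0
    return {"asr": asr, "false_positive_rate": fpr}
```

For example, two attack cases with one success and one benign case handled correctly yield an ASR of 0.5 and a false-positive rate of 0.0.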