
Conditional Adversarial Fragility in Financial Machine Learning under Macroeconomic Stress

Samruddhi Baviskar

0 citations · 16 references

Published on arXiv: 2512.19935

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Adversarial perturbations (PGD, ε=0.1) cause nearly twice the predictive degradation during macroeconomic stress regimes compared to calm periods (RAF = 1.97), with disproportionate increases in false negative rates for high-risk cases.

Conditional Adversarial Fragility / Risk Amplification Factor

Novel technique introduced


Machine learning models used in financial decision systems operate in nonstationary economic environments, yet adversarial robustness is typically evaluated under static assumptions. This work introduces Conditional Adversarial Fragility, a regime-dependent phenomenon in which adversarial vulnerability is systematically amplified during periods of macroeconomic stress. We propose a regime-aware evaluation framework for time-indexed tabular financial classification tasks that conditions robustness assessment on external indicators of economic stress. Using volatility-based regime segmentation as a proxy for macroeconomic conditions, we evaluate model behavior across calm and stress periods while holding model architecture, attack methodology, and evaluation protocols constant. Baseline predictive performance remains comparable across regimes, indicating that economic stress alone does not induce inherent performance degradation. Under adversarial perturbations, however, models operating during stress regimes exhibit substantially greater degradation across predictive accuracy, operational decision thresholds, and risk-sensitive outcomes. We further demonstrate that this amplification propagates to increased false negative rates, elevating the risk of missed high-risk cases during adverse conditions. To complement numerical robustness metrics, we introduce an interpretive governance layer based on semantic auditing of model explanations using large language models. Together, these results demonstrate that adversarial robustness in financial machine learning is a regime-dependent property and motivate stress-aware approaches to model risk assessment in high-stakes financial deployments.
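The volatility-based regime segmentation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact configuration: the rolling window length and the quantile threshold used to separate calm from stress periods are assumptions.

```python
import numpy as np

def label_regimes(returns, window=21, quantile=0.75):
    """Label each period as 'stress' or 'calm' using rolling volatility.

    Hypothetical sketch of volatility-based regime segmentation: periods
    whose rolling volatility exceeds the given quantile of the volatility
    series are labeled 'stress'. Window and quantile are assumptions.
    """
    returns = np.asarray(returns, dtype=float)
    # Rolling standard deviation as a simple volatility proxy
    vol = np.array([returns[max(0, i - window + 1):i + 1].std()
                    for i in range(len(returns))])
    threshold = np.quantile(vol, quantile)
    return np.where(vol > threshold, "stress", "calm")

labels = label_regimes(np.random.default_rng(0).normal(0, 1, 500))
```

Robustness metrics are then computed separately on the rows falling in each regime, with model, attack, and evaluation protocol held constant.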


Key Contributions

  • Introduces 'Conditional Adversarial Fragility' — the empirical finding that adversarial vulnerability in financial ML is regime-dependent and amplified during macroeconomic stress
  • Proposes a regime-aware evaluation framework with a Risk Amplification Factor (RAF) metric that quantifies stress-period adversarial degradation (RAF = 1.97x in experiments)
  • Incorporates an LLM-assisted semantic audit layer to assess stability of post-hoc model explanations under adversarial stress as a governance signal
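The RAF metric above can be sketched as a ratio of adversarial degradation across regimes. The formulation below (degradation as clean-minus-adversarial accuracy) is an assumption for illustration; consult the paper for the exact definition, and the accuracy values are invented to reproduce the reported 1.97.

```python
def risk_amplification_factor(acc_clean_calm, acc_adv_calm,
                              acc_clean_stress, acc_adv_stress):
    """Ratio of adversarial accuracy degradation in stress vs. calm regimes.

    Assumed formulation: degradation = clean accuracy - adversarial
    accuracy, measured separately per regime. RAF > 1 means attacks
    hurt more during stress periods.
    """
    deg_calm = acc_clean_calm - acc_adv_calm
    deg_stress = acc_clean_stress - acc_adv_stress
    return deg_stress / deg_calm

# Illustrative numbers only, chosen so RAF matches the paper's 1.97
raf = risk_amplification_factor(0.90, 0.80, 0.89, 0.693)
```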

🛡️ Threat Analysis

Input Manipulation Attack

Evaluates projected gradient descent (PGD, ε=0.1) adversarial perturbations on tabular financial classification models at inference time, demonstrating that attack effectiveness is regime-dependent and amplified roughly 2x during stress periods. Measuring and characterizing this input manipulation vulnerability is the core contribution.
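A minimal PGD loop under an L-infinity budget looks like the sketch below. The step size and iteration count are generic assumptions, not the paper's attack configuration, and `grad_fn` stands in for the gradient of the model's loss with respect to the input.

```python
import numpy as np

def pgd_attack(grad_fn, x, epsilon=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent under an L-infinity perturbation budget.

    `grad_fn(x)` returns the gradient of the loss w.r.t. the input.
    Generic sketch; alpha and steps are assumptions.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # Ascend the loss with a signed gradient step,
        # then project back into the epsilon-ball around x
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
    return x_adv

# Toy check: loss = sum(x), so the gradient is all ones and the attack
# pushes every feature to the edge of the epsilon-ball
x0 = np.zeros(4)
x_adv = pgd_attack(lambda x: np.ones_like(x), x0, epsilon=0.1)
```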


Details

Domains
tabular, nlp
Model Types
traditional_ml, llm
Threat Tags
white_box, inference_time, digital
Applications
credit risk modeling, financial classification, consumer credit underwriting