
The Black Tuesday Attack: how to crash the stock market with adversarial examples to financial forecasting models

Thomas Hofweber 1, Jefrey Bergl 1, Ian Reyes 1, Amir Sadovnik 2

0 citations · 17 references · arXiv


Published on arXiv · 2510.18990

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Argues that adversarial examples against ML-based financial forecasting models pose a significantly underappreciated systemic risk: they can trigger self-fulfilling market crashes that are hard to detect because the models' predictions remain locally accurate

Black Tuesday Attack

Novel technique introduced


Abstract

We investigate and defend the possibility of causing a stock market crash via small manipulations of individual stock values that together realize an adversarial example against financial forecasting models, causing these models to make the self-fulfilling prediction of a crash. Such a crash triggered by an adversarial example would likely be hard to detect, since the model's predictions would be accurate and the interventions that caused it are minor. This possibility is a major risk to financial stability and an opportunity for hostile actors to inflict great economic damage on an adversary. The same threat also applies to individual stocks and the corresponding valuation of individual companies. We outline how such an attack might proceed, what its theoretical basis is, how it can be directed at a whole economy or an individual company, and how one might defend against it. We conclude that this threat is vastly underappreciated and requires urgent research into defenses.


Key Contributions

  • Identifies and formalizes a novel adversarial example threat to financial stability: small stock value manipulations that cause ML forecasting models to predict crashes, which then become self-fulfilling
  • Connects existing adversarial ML literature to the financial forecasting domain, showing the attack is technically plausible using known techniques
  • Outlines defensive directions against adversarially manipulated financial forecasting inputs, calling for urgent research from investors and regulators

🛡️ Threat Analysis

Input Manipulation Attack

The attack mechanism is input manipulation via adversarial examples: small, crafted perturbations to stock price inputs that cause financial forecasting ML models to predict a crash at inference time, triggering a self-fulfilling sell-off. This applies a classic input manipulation attack (OWASP ML01) to a new, critical domain.
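The mechanics can be sketched with a toy example. The snippet below stands in a linear model for the forecaster and applies an FGSM-style signed-gradient perturbation within a small per-price budget; the model, weights, and `eps` budget are illustrative assumptions for this sketch, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a forecasting model: a linear map from a window of
# recent normalized stock prices to a predicted next-step return.
# All names and parameters here are hypothetical.
w = rng.normal(size=20)        # "learned" weights (illustrative)
prices = rng.normal(size=20)   # recent normalized prices

def predict(x):
    """Predicted next-step return of the toy model."""
    return float(w @ x)

# FGSM-style input manipulation: nudge each input by at most eps in the
# direction that most decreases the predicted return, pushing the model
# toward a "crash" forecast. For a linear model, the gradient of the
# output with respect to the input is simply w.
eps = 0.01                     # per-price perturbation budget (small)
grad = w                       # d predict(x) / d x
x_adv = prices - eps * np.sign(grad)

print(predict(prices), predict(x_adv))
```

Each individual price moves by at most `eps`, yet the predicted return drops by `eps * sum(|w|)`, which grows with the input dimension. This is the core asymmetry the paper exploits: many tiny, hard-to-notice manipulations can aggregate into a large shift in the model's forecast.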


Details

Domains
timeseries
Model Types
RNN · CNN · Transformer
Threat Tags
inference_time · digital · targeted
Applications
stock market forecasting · financial index prediction · algorithmic trading