
Adversarial News and Lost Profits: Manipulating Headlines in LLM-Driven Algorithmic Trading

Advije Rizvani 1, Giovanni Apruzzese 1,2, Pavel Laskov 1

0 citations · 103 references

Published on arXiv · 2601.13082

Input Manipulation Attack (OWASP ML Top 10 — ML01)

Prompt Injection (OWASP LLM Top 10 — LLM01)

Key Finding

A single-day adversarial news headline manipulation reduces annual portfolio returns by up to 17.7 percentage points across FinBERT, FinGPT, FinLLaMA, and general-purpose LLMs driving algorithmic trading systems.

Novel technique introduced: Adversarial News Manipulation (homoglyph substitution + hidden-text injection)


Large Language Models (LLMs) are increasingly adopted in the financial domain. Their exceptional capabilities to analyse textual data make them well-suited for inferring the sentiment of finance-related news. Such feedback can be leveraged by algorithmic trading systems (ATS) to guide buy/sell decisions. However, this practice bears the risk that a threat actor may craft "adversarial news" intended to mislead an LLM. In particular, the news headline may include "malicious" content that remains invisible to human readers but which is still ingested by the LLM. Although prior work has studied textual adversarial examples, their system-wide impact on LLM-supported ATS has not yet been quantified in terms of monetary risk. To address this threat, we consider an adversary with no direct access to an ATS but able to alter stock-related news headlines on a single day. We evaluate two human-imperceptible manipulations in a financial context: Unicode homoglyph substitutions that misroute models during stock-name recognition, and hidden-text clauses that alter the sentiment of the news headline. We implement a realistic ATS in Backtrader that fuses an LSTM-based price forecast with LLM-derived sentiment (FinBERT, FinGPT, FinLLaMA, and six general-purpose LLMs), and quantify monetary impact using portfolio metrics. Experiments on real-world data spanning 14 months show that a single-day attack can reliably mislead LLMs and reduce annual returns by up to 17.7 percentage points. To assess real-world feasibility, we analyze popular scraping libraries and trading platforms and survey 27 FinTech practitioners, confirming our hypotheses. We notified trading platform owners of this security issue.
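The first manipulation described above can be illustrated with a short sketch. The homoglyph mapping below is a small illustrative subset (a few Latin-to-Cyrillic lookalikes); the paper's actual character set and target-word selection are not reproduced here.

```python
# Illustrative Latin -> Cyrillic lookalike mapping (hypothetical subset).
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic 'а'
    "e": "\u0435",  # Cyrillic 'е'
    "o": "\u043e",  # Cyrillic 'о'
    "c": "\u0441",  # Cyrillic 'с'
    "p": "\u0440",  # Cyrillic 'р'
}

def substitute_homoglyphs(text: str, targets: set) -> str:
    """Replace characters of target words (e.g., a stock name) with
    visually identical Cyrillic letters: the headline looks unchanged
    to a human reader, but the token sequence seen by a tokenizer
    differs, which can misroute stock-name recognition."""
    words = []
    for word in text.split(" "):
        if word.lower() in targets:
            word = "".join(HOMOGLYPHS.get(ch, ch) for ch in word)
        words.append(word)
    return " ".join(words)

headline = "Apple shares surge after record earnings"
poisoned = substitute_homoglyphs(headline, {"apple"})
print(poisoned == headline)  # False: the byte sequences differ
```

Note that the poisoned string has the same length and the same visual rendering as the original; only a byte-level or script-aware comparison reveals the substitution.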


Key Contributions

  • Demonstrates two human-imperceptible adversarial text attacks on LLM-driven algorithmic trading: Unicode homoglyph substitutions for stock-name misrouting and hidden-text clauses for sentiment inversion
  • Implements a realistic end-to-end ATS in Backtrader fusing LSTM price forecasts with LLM sentiment from FinBERT, FinGPT, FinLLaMA, and six general-purpose LLMs, then quantifies monetary impact using portfolio metrics
  • Quantifies real-world feasibility via analysis of scraping libraries, trading platforms, and a survey of 27 FinTech practitioners, showing a single-day attack can reduce annual returns by up to 17.7 percentage points over a 14-month window
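The fusion of the LSTM forecast with LLM sentiment can be sketched minimally. The rule below, names, and thresholds are illustrative assumptions, not the paper's implementation: trade only when the price forecast and the sentiment score agree in direction.

```python
def trade_signal(price_forecast_return: float,
                 sentiment_score: float,
                 threshold: float = 0.0) -> str:
    """Hypothetical fusion rule: return 'buy', 'sell', or 'hold' from
    an LSTM forecast return and an LLM sentiment score in [-1, 1].
    Trades are issued only when both signals agree in direction."""
    if price_forecast_return > threshold and sentiment_score > 0:
        return "buy"
    if price_forecast_return < -threshold and sentiment_score < 0:
        return "sell"
    return "hold"

print(trade_signal(0.02, 0.8))    # buy
print(trade_signal(-0.03, -0.5))  # sell
print(trade_signal(0.02, -0.5))   # hold: signals disagree
```

Under such a rule, flipping only the sentiment sign (the effect of a hidden-text clause) is enough to suppress or invert a trade, which is why a single-day headline manipulation can propagate into portfolio returns.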

🛡️ Threat Analysis

Input Manipulation Attack

The paper injects human-imperceptible adversarial content (Unicode homoglyph substitutions and hidden-text clauses) into news headlines, causing the sentiment models that feed the trading pipeline (FinBERT, FinGPT, FinLLaMA, and general-purpose LLMs) to produce incorrect outputs at inference time. This is adversarial document injection against an LLM-integrated system, matching the dual-tagging guideline for adversarial content manipulation of LLM-integrated pipelines.
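The hidden-text vector can be demonstrated with a naive scraper. The markup below is illustrative (the paper's exact injection mechanism is not reproduced): a clause hidden from human readers via inline CSS is still emitted as a text node and therefore reaches the LLM when a scraper concatenates all text content.

```python
from html.parser import HTMLParser

# Hypothetical headline markup: a browser hides the span, a naive
# text extractor does not.
HTML = (
    '<h1>Acme Corp posts quarterly results'
    '<span style="display:none"> amid fraud investigation</span></h1>'
)

class TextExtractor(HTMLParser):
    """Collect every text node, ignoring styling that would hide it."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

parser = TextExtractor()
parser.feed(HTML)
scraped = "".join(parser.chunks)
# The LLM ingests a negative clause a human reader never sees:
print(scraped)  # Acme Corp posts quarterly results amid fraud investigation
```

This is why the feasibility analysis targets scraping libraries: any extractor that discards rendering semantics while keeping all text nodes forwards the hidden clause to the sentiment model.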


Details

Domains
nlp
Model Types
llm, transformer, rnn
Threat Tags
black_box, inference_time, targeted, digital
Datasets
real-world stock news headlines (14-month financial dataset)
Applications
algorithmic trading, financial sentiment analysis, llm-driven trading systems