BadTime: An Effective Backdoor Attack on Multivariate Long-Term Time Series Forecasting
Kunlan Xiang 1, Haomiao Yang 1, Meng Hao 2, Wenbo Jiang 1, Haoxin Wang 3, Shiyue Huang 1, Shaofeng Li 4, Yijing Liu 1, Ji Guo 1, Dusit Niyato 5
Published on arXiv (2508.04189)
Model Poisoning
OWASP ML Top 10 — ML10
Data Poisoning Attack
OWASP ML Top 10 — ML02
Key Finding
BadTime extends the attackable forecasting horizon from 12 to 720 timesteps (60× improvement), reduces MAE by over 50% on target variables, and boosts stealthiness by more than 3× under anomaly detection compared to SOTA backdoor attacks.
BadTime
Novel technique introduced
Multivariate long-term time series forecasting (MLTSF) models are increasingly deployed in critical domains such as climate, finance, and transportation. Despite their growing importance, the security of MLTSF models against backdoor attacks remains entirely unexplored. To bridge this gap, we propose BadTime, the first effective backdoor attack tailored for MLTSF. BadTime can manipulate hundreds of future predictions toward a target pattern by injecting a subtle trigger. BadTime addresses two key challenges that arise uniquely in MLTSF: (i) the rapid dilution of local triggers over long horizons, and (ii) the extreme sparsity of backdoor signals under stealth constraints. To counter dilution, BadTime leverages inter-variable correlations, temporal lags, and data-driven initialization to design a distributed, lag-aware trigger that ensures effective influence over long-range forecasts. To overcome sparsity, it introduces a hybrid strategy to select valuable poisoned samples and a decoupled backdoor training objective that adaptively adjusts the model's focus on the sparse backdoor signal, ensuring reliable learning at a poisoning rate as low as 1%. Extensive experiments show that BadTime significantly outperforms state-of-the-art (SOTA) backdoor attacks on time series forecasting by extending the attackable horizon from at most 12 timesteps to 720 timesteps (a 60-fold improvement), reducing MAE by over 50% on target variables, and boosting stealthiness by more than 3-fold under anomaly detection.
Key Contributions
- First backdoor attack tailored for MLTSF, extending the attackable forecasting horizon from 12 to 720 timesteps (60× improvement over prior SOTA)
- Distributed, lag-aware trigger design leveraging inter-variable correlations and temporal lag analysis to counteract trigger dilution over long forecasting horizons
- Hybrid poisoned sample selection strategy and decoupled backdoor training objective that enable effective backdoor learning at poisoning rates as low as 1%
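The lag-aware trigger idea above can be illustrated with a small sketch: estimate, for each auxiliary variable, the lead time at which it best correlates with the target variable, and plan trigger segments at those offsets so their influence arrives in sync over the forecast horizon. This is a hypothetical illustration of the lag-analysis step only; the function names (`best_lag`, `plan_trigger_offsets`) and the simple Pearson-correlation scan are assumptions, not the paper's exact procedure.

```python
import numpy as np

def best_lag(x: np.ndarray, y: np.ndarray, max_lag: int = 48) -> int:
    """Return the lag at which y (shifted forward) best correlates with x.

    Pairs x[t] with y[t + lag] and scans lag = 1..max_lag, keeping the lag
    with the largest absolute Pearson correlation.
    """
    best, best_corr = 0, -np.inf
    for lag in range(1, max_lag + 1):
        c = np.corrcoef(x[:-lag], y[lag:])[0, 1]
        if abs(c) > best_corr:
            best, best_corr = lag, abs(c)
    return best

def plan_trigger_offsets(data: np.ndarray, target: int,
                         max_lag: int = 48) -> dict:
    """For each non-target variable in data (shape (T, V)), pick the lead
    time at which a trigger segment on that variable should precede the
    target variable, based on lagged correlation."""
    return {v: best_lag(data[:, v], data[:, target], max_lag)
            for v in range(data.shape[1]) if v != target}
```

Distributing trigger segments across correlated variables at their natural lead times is what lets a subtle perturbation keep influencing predictions hundreds of timesteps out, rather than being diluted as a single local pattern would be.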
🛡️ Threat Analysis
The attack operates via training-data poisoning, combining a hybrid poisoned-sample selection strategy with a decoupled backdoor training objective; achieving high attack efficacy at a poisoning rate of only 1% is a primary novel contribution alongside the trigger design.
BadTime is a backdoor/trojan attack that injects subtle trigger patterns into training data to manipulate MLTSF model predictions toward a target pattern at inference time — trigger-activated targeted behavior is the defining characteristic of ML10.
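The poisoning step described above can be sketched as follows: add a small trigger perturbation to a 1% subset of training input windows and replace their forecast labels with the attacker's target pattern. This is a minimal illustration of the data-poisoning mechanic; the uniform random sample choice below is a placeholder for the paper's hybrid selection strategy, and all names (`poison_dataset`, `target_pattern`) are assumptions.

```python
import numpy as np

def poison_dataset(windows: np.ndarray, labels: np.ndarray,
                   trigger: np.ndarray, target_pattern: np.ndarray,
                   rate: float = 0.01, seed: int = 0):
    """Poison a fraction `rate` of (input window, forecast label) pairs.

    windows: (N, L, V) input windows; labels: (N, H, V) forecast labels.
    trigger: (L, V) additive perturbation; target_pattern: (H, V) pattern
    the backdoored model should predict when the trigger is present.
    Returns poisoned copies of both arrays and the chosen indices.
    """
    rng = np.random.default_rng(seed)
    n_poison = max(1, int(rate * len(windows)))
    idx = rng.choice(len(windows), size=n_poison, replace=False)
    Xp, Yp = windows.copy(), labels.copy()
    Xp[idx] += trigger            # subtle additive trigger on inputs
    Yp[idx] = target_pattern      # relabel horizon with target pattern
    return Xp, Yp, idx
```

At a 1% rate the backdoor signal is extremely sparse in the training set, which is why the paper pairs poisoning with a decoupled training objective that adaptively re-weights the model's attention to the poisoned samples.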