Keep the Lights On, Keep the Lengths in Check: Plug-In Adversarial Detection for Time-Series LLMs in Energy Forecasting

Hua Ma 1, Ruoxi Sun 1, Minhui Xue 1, Xingliang Yuan 2, Carsten Rudolph 3, Surya Nepal 1, Ling Liu 4

0 citations · 69 references · arXiv

Published on arXiv: 2512.12154

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

The proposed plug-in detection framework achieves strong and robust adversarial example detection performance under both black-box and white-box attack scenarios across three TS-LLMs and three energy datasets.

Sampling-Induced Divergence Detection

Novel technique introduced


Accurate time-series forecasting is increasingly critical for planning and operations in low-carbon power systems. Emerging time-series large language models (TS-LLMs) now deliver this capability at scale, requiring no task-specific retraining, and are quickly becoming essential components within the Internet-of-Energy (IoE) ecosystem. However, their real-world deployment is complicated by a critical vulnerability: adversarial examples (AEs). Detecting these AEs is challenging because (i) adversarial perturbations are optimized across the entire input sequence and exploit global temporal dependencies, which renders local detection methods ineffective, and (ii) unlike traditional forecasting models with fixed input dimensions, TS-LLMs accept sequences of variable length, which introduces input variability that further complicates detection. To address these challenges, we propose a plug-in detection framework that capitalizes on the TS-LLM's own variable-length input capability. Our method uses sampling-induced divergence as a detection signal. Given an input sequence, we generate multiple shortened variants and detect AEs by measuring the consistency of their forecasts: benign sequences tend to produce stable predictions under sampling, whereas adversarial sequences show low forecast similarity, because perturbations optimized for a full-length sequence do not transfer reliably to shorter, differently structured subsamples. We evaluate our approach on three representative TS-LLMs (TimeGPT, TimesFM, and TimeLLM) across three energy datasets: ETTh2 (Electricity Transformer Temperature), NI (Hourly Energy Consumption), and Consumption (Hourly Electricity Consumption and Production). Empirical results confirm strong and robust detection performance across both black-box and white-box attack scenarios, highlighting the framework's practicality as a reliable safeguard for TS-LLM forecasting in real-world energy systems.


Key Contributions

  • Plug-in adversarial detection framework that exploits TS-LLMs' variable-length input capability to detect adversarial examples without model retraining
  • Sampling-induced divergence signal: benign sequences produce stable forecasts under length subsampling, while adversarial perturbations optimized for full-length inputs degrade under shortened variants
  • Empirical validation across three TS-LLMs (TimeGPT, TimesFM, TimeLLM) and three real-world energy datasets under both black-box and white-box threat models
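The detection signal described above can be sketched in a few lines. The paper's exact subsampling scheme, similarity metric, and threshold are not given here, so this sketch makes assumptions: shortened variants are taken as random-length suffixes of the input (preserving temporal order), forecast consistency is measured as mean pairwise Pearson correlation, and `forecast_fn` stands in for any TS-LLM that accepts variable-length inputs.

```python
import numpy as np

def detect_adversarial(series, forecast_fn, horizon=24, n_variants=8,
                       min_frac=0.5, threshold=0.8, seed=0):
    """Flag a sequence as adversarial when forecasts from shortened
    variants diverge (a sketch of sampling-induced divergence; the
    paper's actual sampling and similarity choices may differ).

    forecast_fn(window, horizon) -> array of `horizon` predictions.
    Returns (is_adversarial, consistency_score).
    """
    rng = np.random.default_rng(seed)
    n = len(series)
    forecasts = []
    for _ in range(n_variants):
        # Keep a random-length suffix so temporal order is preserved
        # and the model still sees the most recent history.
        length = int(rng.integers(int(min_frac * n), n + 1))
        forecasts.append(np.asarray(forecast_fn(series[-length:], horizon)))
    f = np.stack(forecasts)

    # Mean pairwise Pearson correlation across variant forecasts:
    # high for benign inputs, low when a full-length perturbation
    # fails to transfer to the shortened variants.
    sims = [np.corrcoef(f[i], f[j])[0, 1]
            for i in range(n_variants)
            for j in range(i + 1, n_variants)]
    consistency = float(np.mean(sims))
    return consistency < threshold, consistency
```

Because this plug-in check only queries the forecaster on shortened copies of the input, it needs no retraining and no access to model internals, matching the black-box setting the paper targets.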

🛡️ Threat Analysis

Input Manipulation Attack

The paper proposes a defense (plug-in detection framework) against gradient-optimized adversarial perturbations on time-series inputs to TS-LLMs at inference time, covering both black-box and white-box attack scenarios — classic input manipulation attack defense.


Details

Domains
time-series, NLP
Model Types
llm
Threat Tags
white_box, black_box, inference_time
Datasets
ETTh2 (Electricity Transformer Temperature), NI (Hourly Energy Consumption), Consumption (Hourly Electricity Consumption and Production)
Applications
energy forecasting, time-series forecasting, internet-of-energy systems