ReLATE+: Unified Framework for Adversarial Attack Detection, Classification, and Resilient Model Selection in Time-Series Classification
Cagla Ipek Kocal¹, Onat Gungor¹, Tajana Rosing², Baris Aksanli²
Published on arXiv (arXiv:2508.19456)
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
ReLATE+ reduces computational overhead by an average of 77.68% while maintaining classification performance within 2.02% of Oracle across diverse time-series domains under adversarial attack conditions.
Novel technique introduced: ReLATE+
Minimizing computational overhead in time-series classification, particularly for deep learning models, is challenging due to the complexity of model architectures and the volume of sequential data that must be processed in real time. Adversarial attacks compound this challenge, motivating resilient methods that ensure robust performance and efficient model selection. We propose ReLATE+, a comprehensive framework that detects and classifies adversarial attacks and adaptively selects deep learning models based on dataset-level similarity, substantially reducing retraining costs relative to conventional methods that do not leverage prior knowledge, while maintaining strong performance. ReLATE+ first checks whether incoming data is adversarial and, if so, classifies the attack type; it then uses this insight to identify a similar dataset in a repository and reuse the best-performing model associated with it. This approach reduces the need for retraining and generalizes across domains with varying data distributions and feature spaces. Experiments show that ReLATE+ reduces computational overhead by an average of 77.68%, enhancing adversarial resilience and streamlining robust model selection while staying within 2.02% of Oracle performance.
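The decision flow described above (detect, classify the attack, match a similar repository dataset, reuse its best model) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `dataset_signature` statistics, the L2 similarity metric, and the repository layout (`signature`, `clean_model`, `best_model_per_attack`) are all assumptions made for the example.

```python
import numpy as np

def dataset_signature(X):
    """Summarize a dataset (windows x timesteps x features) with simple
    per-feature mean/std statistics. Illustrative only; the paper's
    dataset-level similarity measure may differ."""
    return np.concatenate([X.mean(axis=(0, 1)), X.std(axis=(0, 1))])

def most_similar(signature, repository):
    """Pick the repository entry whose signature is closest in L2 distance."""
    return min(repository, key=lambda e: np.linalg.norm(signature - e["signature"]))

def relate_plus(X, detector, attack_classifier, repository):
    """Hypothetical ReLATE+-style selection flow:
    1) detect whether X is adversarial;
    2) if so, classify the attack type;
    3) find the most similar repository dataset;
    4) reuse the best-performing model recorded for that setting."""
    sig = dataset_signature(X)
    entry = most_similar(sig, repository)
    if not detector(X):                      # clean input: reuse the clean-data model
        return entry["clean_model"]
    attack = attack_classifier(X)            # e.g. "fgsm", "pgd", ...
    return entry["best_model_per_attack"][attack]
```

Because the framework only looks up a pre-evaluated model instead of retraining one, the per-deployment cost reduces to computing a signature and two classifier passes, which is where the reported overhead savings come from.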
Key Contributions
- Adversarial attack detection and multi-class attack-type classification pipeline for time-series data
- Dataset-similarity-based resilient model selection that reuses pre-evaluated models from a repository under adversarial conditions
- 77.68% reduction in computational overhead compared to conventional retraining approaches while staying within 2.02% of Oracle performance
🛡️ Threat Analysis
The framework defends against adversarial input manipulation attacks on time-series classification models — detecting whether inputs are adversarially perturbed at inference time and classifying the attack type to enable resilient model selection.
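For intuition on the detection step, here is one generic way to flag adversarially perturbed inputs at inference time: measure each test window's nearest-neighbor distance to a clean reference set and flag windows whose distance exceeds a threshold calibrated on held-out clean data. This heuristic is an assumption chosen for illustration, not the detector ReLATE+ actually uses.

```python
import numpy as np

def nn_distances(X, reference):
    """Min L2 distance from each window in X to any window in reference.
    Windows are flattened to vectors before comparison."""
    flat_X = X.reshape(len(X), -1)
    flat_ref = reference.reshape(len(reference), -1)
    d = np.linalg.norm(flat_X[:, None, :] - flat_ref[None, :, :], axis=-1)
    return d.min(axis=1)

def fit_threshold(clean_calibration, reference, quantile=0.99):
    """Calibrate the detection threshold on held-out clean windows so that
    roughly (1 - quantile) of clean traffic is falsely flagged."""
    return np.quantile(nn_distances(clean_calibration, reference), quantile)

def detect_adversarial(X, reference, threshold):
    """Boolean mask: True where a window looks perturbed (far from clean data)."""
    return nn_distances(X, reference) > threshold
```

A distance-based detector like this is attack-agnostic, which matches the framework's need to first answer "is this input adversarial at all?" before the separate attack-type classifier assigns a label.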