Defense · 2026

Mitigating Evasion Attacks in Fog Computing Resource Provisioning Through Proactive Hardening

Younes Salmi, Hanna Bogucka

0 citations


Published on arXiv (2603.25257)

Input Manipulation Attack (OWASP ML Top 10 — ML01)

Key Finding

Adversarial training effectively maintains the stability of the resource provisioning system against evasion attacks targeting the k-means classifier.

Adversarial Training for RPS Hardening

Novel technique introduced


This paper investigates the susceptibility of fog-network resource provisioning to model integrity attacks that overload the virtual machines assigned by the k-means algorithm. The considered k-means algorithm runs two phases iteratively: offline clustering, which forms clusters of requested workloads, and online classification, which assigns new incoming requests to the offline-created clusters. We consider an evasion attack against the classifier in the online phase: a threat actor first launches an exploratory attack using query-based reverse engineering to discover the Machine Learning (ML) model (the clustering scheme), and then triggers the evasion attack itself against the online classifier at inference time. To defend the model, we propose a proactive method that uses adversarial training to introduce attack robustness into the classifier. Our results show that this mitigation technique effectively maintains the stability of the resource provisioning system under attack.
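The two-phase workflow and the hardening step can be pictured with a short sketch. This is a minimal illustration assuming a standard scikit-learn k-means pipeline; the feature space, the parameters, and the `perturb_toward_boundary` helper (a simplified stand-in for the paper's Fake Trace Generator) are hypothetical, not taken from the paper.

```python
# Minimal sketch of the two-phase k-means RPS with proactive hardening.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# --- Offline phase: cluster historical workload requests (e.g., CPU, RAM) ---
X_hist = rng.uniform(0, 1, size=(1000, 2))  # stand-in workload traces
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_hist)

def perturb_toward_boundary(x, centroids, eps=0.05):
    """Nudge a request toward its second-nearest centroid (an assumed,
    simplified stand-in for the paper's Fake Trace Generator)."""
    d = np.linalg.norm(centroids - x, axis=1)
    nearest, second = np.argsort(d)[:2]
    direction = centroids[second] - centroids[nearest]
    return x + eps * direction / np.linalg.norm(direction)

# --- Proactive hardening: augment the offline data with boundary-adjacent
# adversarial requests and re-run clustering on the mixture ---
X_adv = np.array([perturb_toward_boundary(x, kmeans.cluster_centers_)
                  for x in X_hist])
kmeans_hardened = KMeans(n_clusters=4, n_init=10, random_state=0).fit(
    np.vstack([X_hist, X_adv]))

# --- Online phase: classify each new incoming request into an offline cluster ---
x_new = rng.uniform(0, 1, size=(1, 2))
vm_cluster = kmeans_hardened.predict(x_new)[0]
print(f"Request routed to VM cluster {vm_cluster}")
```

Re-clustering on the clean-plus-adversarial mixture is one way to realize adversarial training for a non-gradient model like k-means; the paper's exact augmentation scheme may differ.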


Key Contributions

  • Proactive adversarial training defense for k-means-based resource provisioning systems in fog computing
  • Demonstrates vulnerability of ML-based resource allocation to evasion attacks that overload VMs
  • Shows adversarial training maintains system stability against query-based model extraction followed by evasion attacks

🛡️ Threat Analysis

Input Manipulation Attack

The primary focus is evasion attacks (adversarial examples) against the k-means classifier in the online phase: input requests are manipulated to cause misclassification and overload VMs. The paper explicitly describes crafting adversarial examples with a Fake Trace Generator to push requests across classifier decision boundaries at inference time.
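A compact sketch of that exploratory-then-evasion pipeline is below. The black-box oracle `target_classify`, the `NearestCentroid` surrogate, and the probing budget are illustrative assumptions; the paper's actual query strategy and generator may differ.

```python
# Hedged sketch: query-based reverse engineering of the online classifier,
# followed by evasion. Victim model, budget, and surrogate are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(1)

# Hidden victim model (unknown to the attacker): the RPS's online classifier.
_victim = KMeans(n_clusters=4, n_init=10, random_state=1).fit(
    rng.uniform(0, 1, size=(1000, 2)))

def target_classify(x):
    """Black-box oracle: the attacker only observes the assigned VM cluster."""
    return int(_victim.predict(x[None])[0])

# 1) Exploratory attack: probe the classifier with synthetic requests.
probes = rng.uniform(0, 1, size=(500, 2))
labels = np.array([target_classify(x) for x in probes])

# 2) Reverse-engineer the clustering scheme with a surrogate model.
surrogate = NearestCentroid().fit(probes, labels)

# 3) Evasion: walk a benign request toward the victim cluster's centroid
#    so the real RPS misroutes it to an already-loaded VM.
def craft_evasion(x, victim_label, step=0.02, max_iter=200):
    x = x.copy()
    target = surrogate.centroids_[victim_label]
    for _ in range(max_iter):
        if target_classify(x) == victim_label:
            return x
        x = x + step * (target - x) / np.linalg.norm(target - x)
    return x

adv = craft_evasion(rng.uniform(0, 1, size=2), victim_label=0)
print("Adversarial request:", adv, "-> cluster", target_classify(adv))
```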


Details

Domains: tabular
Model Types: traditional_ml
Threat Tags: inference_time, black_box, targeted
Applications: fog computing resource provisioning, vm allocation, workload clustering