Defense · 2025

Adaptive Meta-learning-based Adversarial Training for Robust Automatic Modulation Classification

Amirmohammad Bamdad , Ali Owfi , Fatemeh Afghah

4 citations · 23 references · IEEE International Conference ...


Published on arXiv: 2501.01620

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

The meta-learning framework provides superior robustness against unseen adversarial attacks, with much less online training time than conventional adversarial training of AMC models.

Meta-learning-based Adversarial Training (MLAT)

Novel technique introduced


Abstract

DL-based automatic modulation classification (AMC) models are highly susceptible to adversarial attacks, where even minimal input perturbations can cause severe misclassifications. While adversarially training an AMC model based on an adversarial attack significantly increases its robustness against that attack, the AMC model will still be defenseless against other adversarial attacks. The theoretically infinite possibilities for adversarial perturbations mean that an AMC model will inevitably encounter new unseen adversarial attacks if it is ever to be deployed to a real-world communication system. Moreover, the computational limitations and challenges of obtaining new data in real-time will not allow a full training process for the AMC model to adapt to the new attack when it is online. To this end, we propose a meta-learning-based adversarial training framework for AMC models that substantially enhances robustness against unseen adversarial attacks and enables fast adaptation to these attacks using just a few new training samples, if any are available. Our results demonstrate that this training framework provides superior robustness and accuracy with much less online training time than conventional adversarial training of AMC models, making it highly efficient for real-world deployment.


Key Contributions

  • Meta-learning-based adversarial training framework enabling AMC models to generalize robustness to unseen adversarial attack types not seen during training
  • Few-shot online adaptation mechanism that rapidly fine-tunes the model to novel attacks with minimal new samples and training time
  • Demonstrates superior robustness-accuracy tradeoff with significantly reduced online adaptation cost compared to conventional adversarial training
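The contributions above can be sketched in miniature. The following is a first-order MAML-style loop, assuming a toy logistic-regression classifier on synthetic 2-D features and FGSM attacks of varying strength as the "tasks" — not the paper's CNN-on-signals architecture or its actual attack suite. The inner step adapts to one attack; the outer step updates the meta-parameters so that a single gradient step suffices for a new, unseen attack strength:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def loss_grads(w, b, X, y):
    """Gradients of binary cross-entropy for a logistic-regression classifier."""
    err = sigmoid(X @ w + b) - y          # dLoss/dz per sample
    return X.T @ err / len(y), err.mean()

def fgsm(w, b, X, y, eps):
    """FGSM: perturb each input along the sign of its input gradient."""
    err = sigmoid(X @ w + b) - y
    return X + eps * np.sign(np.outer(err, w))

# Toy separable 2-D data standing in for modulation features
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b = np.zeros(2), 0.0
alpha, beta = 0.5, 0.1                    # inner / outer (meta) learning rates
eps_tasks = [0.05, 0.1, 0.2]              # attack strengths treated as meta-tasks

for _ in range(300):
    meta_gw, meta_gb = np.zeros(2), 0.0
    for eps in eps_tasks:
        # Inner step: one adaptation step on this task's adversarial examples
        gw, gb = loss_grads(w, b, fgsm(w, b, X, y, eps), y)
        w_i, b_i = w - alpha * gw, b - alpha * gb
        # Outer step (first-order MAML): adversarial loss at adapted parameters
        gw2, gb2 = loss_grads(w_i, b_i, fgsm(w_i, b_i, X, y, eps), y)
        meta_gw += gw2 / len(eps_tasks)
        meta_gb += gb2 / len(eps_tasks)
    w, b = w - beta * meta_gw, b - beta * meta_gb

# Few-shot online adaptation to an unseen, stronger attack (eps = 0.3)
Xs, ys = X[:20], y[:20]                   # only a few new samples
gw, gb = loss_grads(w, b, fgsm(w, b, Xs, ys, 0.3), ys)
w_new, b_new = w - alpha * gw, b - alpha * gb

clean_acc = np.mean((sigmoid(X @ w_new + b_new) > 0.5) == y)
adv_acc = np.mean((sigmoid(fgsm(w_new, b_new, X, y, 0.3) @ w_new + b_new) > 0.5) == y)
```

The single post-meta-training gradient step mirrors the paper's claimed benefit: adaptation cost is one cheap update on a handful of samples rather than a full retraining run.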

🛡️ Threat Analysis

Input Manipulation Attack

Defends against adversarial input perturbations (minimal signal perturbations causing misclassification) at inference time by proposing a meta-learning-based adversarial training framework that generalizes to unseen attack types rather than only known attacks.
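To make the attack class concrete, here is a minimal FGSM-style input perturbation against a hypothetical linear classifier (a toy stand-in, not the paper's CNN): a small L∞ nudge along the sign of the input gradient flips a correct prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Hypothetical trained linear classifier standing in for an AMC model
w, b = np.array([1.0, 1.0]), 0.0
x, y = np.array([0.2, 0.1]), 1.0          # correctly classified sample

# FGSM: step along the sign of the loss gradient w.r.t. the input
err = sigmoid(w @ x + b) - y              # dLoss/dz
x_adv = x + 0.2 * np.sign(err * w)        # perturbation magnitude 0.2 per dim

pred_clean = sigmoid(w @ x + b) > 0.5     # correct before the attack
pred_adv = sigmoid(w @ x_adv + b) > 0.5   # flipped by the small perturbation
```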


Details

Domains
timeseries
Model Types
cnn
Threat Tags
inference_time · digital · white_box · untargeted
Applications
automatic modulation classification · wireless signal classification