Attack Assessment and Augmented Identity Recognition for Human Skeleton Data
Joseph G. Zalameda 1, Megan A. Witherow 1, Alexander M. Glandon 2, Jose Aguilera 1, Khan M. Iftekharuddin 1
Published on arXiv
2603.24232
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Achieves increased robustness to multiple unseen adversarial attacks while maintaining test accuracy consistent with the original model trained on real data
Attack-AAIRS
Novel technique introduced
Machine learning models trained on small data sets for security applications are especially vulnerable to adversarial attacks. Person identification from LiDAR-based skeleton data requires time-consuming and expensive data acquisition for each subject identity. Recently, Assessment and Augmented Identity Recognition for Skeletons (AAIRS) has been used to train Hierarchical Co-occurrence Networks for Person Identification (HCN-ID) with small LiDAR-based skeleton data sets. However, AAIRS neither evaluates the robustness of HCN-ID to adversarial attacks nor inoculates the model to defend against such attacks. Popular perturbation-based approaches to generating adversarial attacks are constrained to targeted perturbations added to real training samples, which is not ideal for inoculating models with small training sets. Thus, we propose Attack-AAIRS, a novel addition to the AAIRS framework. Attack-AAIRS leverages a small real data set and a GAN-generated synthetic data set to assess and improve model robustness against unseen adversarial attacks. Rather than being constrained to perturbations of limited real training samples, the GAN learns the distribution of adversarial attack samples that exploit weaknesses in HCN-ID. Attack samples drawn from this distribution augment training to inoculate the HCN-ID and improve robustness. Ten-fold cross-validation of Attack-AAIRS yields increased robustness to unseen attacks, including FGSM, PGD, Additive Gaussian Noise, MI-FGSM, and BIM. The HCN-ID Synthetic Data Quality Score for Attack-AAIRS indicates that generated attack samples are of similar quality to the original benign synthetic samples generated by AAIRS. Furthermore, inoculated models show final test accuracy consistent with the original model trained on real data, demonstrating that our method improves robustness to adversarial attacks without reducing test performance on real data.
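The inoculation step the abstract describes can be sketched as mixing attack samples drawn from a learned attack distribution into the real training batch. The sketch below is illustrative only: the helper name, the sampling strategy, and the mixing ratio are assumptions, not the paper's implementation, and a plain NumPy array stands in for the skeleton data.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_with_attacks(real_x, real_y, attack_x, attack_y, ratio=0.5):
    """Hypothetical inoculation helper: append GAN-generated attack samples
    (here, any precomputed array) to the real training set so the classifier
    sees attack-distribution inputs during training, then shuffle."""
    n_attack = int(len(real_x) * ratio)  # assumed mixing ratio, not from the paper
    idx = rng.choice(len(attack_x), size=n_attack, replace=False)
    x = np.concatenate([real_x, attack_x[idx]])
    y = np.concatenate([real_y, attack_y[idx]])
    perm = rng.permutation(len(x))       # shuffle so batches interleave both kinds
    return x[perm], y[perm]

# Toy stand-in data: 10 real samples, 20 "attack" samples, 4 features each.
real_x, real_y = rng.normal(size=(10, 4)), np.arange(10) % 2
attack_x, attack_y = rng.normal(size=(20, 4)), np.arange(20) % 2
aug_x, aug_y = augment_with_attacks(real_x, real_y, attack_x, attack_y)
print(aug_x.shape)  # (15, 4): 10 real + 5 attack samples
```

The augmented set would then be fed to the usual HCN-ID training loop in place of the real-only set.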
Key Contributions
- Attack-AAIRS framework that uses GANs to generate adversarial attack samples from learned attack distribution rather than perturbing limited real samples
- Demonstrates improved robustness against unseen attacks (FGSM, PGD, MI-FGSM, BIM, Gaussian noise) while maintaining test accuracy on benign data
- Addresses adversarial robustness for small training set scenarios in person identification from LiDAR skeleton data
🛡️ Threat Analysis
The paper addresses adversarial perturbation attacks (FGSM, PGD, MI-FGSM, BIM) on skeleton-based classification models and proposes GAN-augmented adversarial training as a defense.
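For context, FGSM, the simplest of the perturbation attacks listed above, perturbs each input dimension by a fixed step in the sign direction of the loss gradient. A minimal NumPy sketch, using a toy linear loss whose gradient is known in closed form (not the paper's models or data):

```python
import numpy as np

def fgsm_attack(x, grad, epsilon):
    """Fast Gradient Sign Method: move the input by epsilon in the sign
    direction of the loss gradient, an L-infinity-bounded perturbation."""
    return x + epsilon * np.sign(grad)

# Toy linear "loss" L(x) = w.x, so dL/dx = w (assumed stand-in for a real model).
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
x_adv = fgsm_attack(x, w, epsilon=0.1)
print(x_adv)  # [1.1 0.9 1.1]
```

PGD and BIM iterate this step with projection back into the epsilon ball, and MI-FGSM adds gradient momentum; all stay within the same perturbation-bounded threat model.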