Defense · 2026

NeuroShield: A Neuro-Symbolic Framework for Adversarial Robustness

Ali Shafiee Sarvestani, Jason Schmidt, Arman Roohi

0 citations · 28 references · arXiv


Published on arXiv: 2601.13162

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

PGD-Neuro-Symbolic model improves adversarial accuracy by 17.35% over standard PGD adversarial training on GTSRB at ε=8/255 without reducing clean accuracy.

NeuroShield

Novel technique introduced


Adversarial vulnerability and lack of interpretability are critical limitations of deep neural networks, especially in safety-sensitive settings such as autonomous driving. We introduce NeuroShield, a neuro-symbolic framework that integrates symbolic rule supervision into neural networks to enhance both adversarial robustness and explainability. Domain knowledge is encoded as logical constraints over appearance attributes such as shape and color, and enforced through semantic and symbolic logic losses applied during training. Using the GTSRB dataset, we evaluate robustness against FGSM and PGD attacks at a standard ℓ∞ perturbation budget of ε = 8/255. Relative to clean training, standard adversarial training provides modest robustness improvements (~10 percentage points). In contrast, our FGSM-Neuro-Symbolic and PGD-Neuro-Symbolic models achieve substantially larger gains, improving adversarial accuracy by 18.1% and 17.35%, respectively, over their corresponding adversarial-training baselines, without reducing clean-sample accuracy. Measured against the same clean-training baseline, this is roughly a three-fold larger robustness gain than standard adversarial training provides. Compared to transformer-based defenses such as LNL-MoEx, which require heavy architectures and extensive data augmentation, our PGD-Neuro-Symbolic variant attains comparable or superior robustness using a ResNet18 backbone trained for 10 epochs. These results show that symbolic reasoning offers an effective path to robust and interpretable AI.


Key Contributions

  • Neuro-symbolic training framework (NeuroShield) that encodes domain knowledge as logical constraints over appearance attributes (shape, color) enforced via semantic and symbolic logic losses
  • Demonstrates that neuro-symbolic supervision achieves ~18% adversarial accuracy gains over adversarial training baselines on GTSRB — roughly 3× larger than standard adversarial training alone
  • Shows comparable robustness to transformer-based defenses (LNL-MoEx) using only a lightweight ResNet18 trained for 10 epochs
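The summary does not spell out the exact form of the symbolic logic loss, but the idea of enforcing rules like "a stop sign must be octagonal" can be sketched as a differentiable penalty on rule violations. The sketch below is a minimal, hypothetical illustration using a product t-norm for the implication "class c implies attribute a"; the actual NeuroShield losses may differ.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def symbolic_logic_loss(class_probs, attr_probs, rules):
    """Average violation of rules 'class c implies attribute a'.
    Under a product t-norm, the implication c -> a is violated
    with degree p(c) * (1 - p(a)); summing over rules gives a
    penalty that can be added to the usual classification loss."""
    v = sum(class_probs[:, c] * (1.0 - attr_probs[:, a]) for c, a in rules)
    return float(np.mean(v))

# Hypothetical rule: class 0 ("stop sign") implies attribute 2 ("octagon").
rules = [(0, 2)]
class_probs  = softmax(np.array([[2.0, 0.5, 0.1]]))   # confident "stop sign"
consistent   = softmax(np.array([[0.1, 0.2, 3.0]]))   # octagon likely
inconsistent = softmax(np.array([[3.0, 0.2, 0.1]]))   # octagon unlikely

loss_ok  = symbolic_logic_loss(class_probs, consistent, rules)
loss_bad = symbolic_logic_loss(class_probs, inconsistent, rules)
```

In training, a term like this would be weighted and added to the cross-entropy loss, so the network is pushed toward predictions whose class and attribute heads agree with the domain rules.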

🛡️ Threat Analysis

Input Manipulation Attack

The paper directly proposes and evaluates a defense against adversarial example attacks (FGSM and PGD) at inference time, achieving 17–18 percentage point improvements over standard adversarial training baselines on GTSRB.
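For reference, the PGD threat model evaluated here iterates small signed-gradient steps and projects the result back into an ℓ∞ ball of radius ε = 8/255 around the clean input. The sketch below shows this loop against a toy logistic-regression stand-in (the paper attacks a ResNet18; the toy model just makes the input gradient computable in closed form).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps=8/255, alpha=2/255, steps=10):
    """l_inf PGD against a toy logistic-regression model.
    The BCE loss gradient w.r.t. the input is (sigmoid(w.x+b) - y) * w.
    (Toy stand-in for illustration; not the paper's classifier.)"""
    x_adv = x.copy()
    for _ in range(steps):
        grad = (sigmoid(x_adv @ w + b) - y) * w    # dLoss/dx
        x_adv = x_adv + alpha * np.sign(grad)      # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep valid pixel range
    return x_adv

# Toy "image" of 8 pixels with true label 1.
rng = np.random.RandomState(0)
x = np.clip(rng.rand(8), 0.2, 0.8)
w, b, y = np.ones(8), -4.0, 1.0
x_adv = pgd_attack(x, y, w, b)
```

FGSM is the one-step special case of this loop. The perturbation stays imperceptible by construction (at most 8/255 per pixel), which is why robust accuracy at this budget is the headline metric above.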


Details

Domains: vision
Model Types: cnn
Threat Tags: white_box · inference_time · digital · untargeted
Datasets: GTSRB
Applications: traffic sign classification · autonomous driving