Detection of Adversarial Attacks in Robotic Perception
Ziad Sharawy, Mohammad Nakshbandi, and Sorin Mihai Grigorescu
Published on arXiv: 2603.28594
Input Manipulation Attack
OWASP ML Top 10 — ML01
Deep Neural Networks (DNNs) achieve strong performance in semantic segmentation for robotic perception but remain vulnerable to adversarial attacks, threatening safety-critical applications. While robustness has been studied for image classification, semantic segmentation in robotic contexts requires specialized architectures and detection strategies.
Key Contributions
- A framework for detecting adversarial attacks on semantic segmentation models built on pre-trained ResNet-18 and ResNet-50 backbones
- Statistical detection metrics combining confidence scores, non-maximal entropy, and kernel density estimates to distinguish clean from adversarial inputs
- A comparative analysis of network architectures to identify factors that enhance robustness in robotic perception
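The metric names above come from the paper's contribution list; the concrete formulas below (mean max-softmax confidence, entropy over the renormalized non-argmax classes, and a Gaussian kernel density score against a reference set of clean-input statistics) are illustrative assumptions about how such detectors are commonly built, not the paper's exact definitions:

```python
import math

def detection_scores(prob_maps):
    """Per-image detection statistics from per-pixel softmax outputs.

    prob_maps: list of per-pixel class-probability vectors (each sums to 1).
    Returns (mean max-softmax confidence, mean non-maximal entropy).
    Adversarial inputs often show lower confidence and shifted entropy.
    """
    confidences, nm_entropies = [], []
    for p in prob_maps:
        i = p.index(max(p))          # argmax class for this pixel
        confidences.append(p[i])
        rest = p[:i] + p[i + 1:]     # non-argmax probabilities
        s = sum(rest)
        if s > 0:
            # Entropy of the non-argmax classes, renormalized to sum to 1
            # (assumed formulation of "non-maximal entropy").
            nm_entropies.append(-sum((q / s) * math.log(q / s)
                                     for q in rest if q > 0))
        else:
            nm_entropies.append(0.0)
    n = len(prob_maps)
    return sum(confidences) / n, sum(nm_entropies) / n

def kde_score(x, clean_samples, bandwidth=0.1):
    """Gaussian kernel density of a statistic x under clean-input samples.

    A low density means x is unlike the clean reference distribution,
    which a detector can flag via a threshold (hypothetical usage).
    """
    norm = len(clean_samples) * bandwidth * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-((x - s) ** 2) / (2.0 * bandwidth ** 2))
               for s in clean_samples) / norm
```

A detector would compute these statistics per input, then threshold the KDE score fitted on clean data: inputs whose (confidence, entropy) statistics fall in low-density regions are flagged as adversarial.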
🛡️ Threat Analysis
Input Manipulation Attack
The paper focuses on detecting adversarial examples that target semantic segmentation models at inference time: adversarial inputs crafted to cause misclassification in robotic perception.
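A standard way such inference-time inputs are crafted is the Fast Gradient Sign Method (FGSM), which nudges each input feature by a small step in the direction that increases the model's loss. The sketch below applies it to a toy logistic model with an analytic gradient; the model, weights, and epsilon are invented for illustration, and the paper's contribution is detecting such inputs, not this crafting code:

```python
import math

def fgsm_perturb(x, w, b, y, eps=0.1):
    """FGSM on a toy logistic model p = sigmoid(w.x + b).

    x: input features, w/b: model weights and bias, y: true label in {0, 1},
    eps: perturbation budget. Returns the adversarially perturbed input.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    # Gradient of binary cross-entropy loss w.r.t. the input: (p - y) * w.
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    # Step each feature by eps in the loss-increasing direction.
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

The perturbed input stays within an L-infinity ball of radius eps around the original, which is why such attacks can be visually imperceptible while still degrading segmentation quality.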
Details
Domains
vision
Model Types
cnn
Threat Tags
inference_time
Applications
semantic segmentation, robotic perception, autonomous driving