PHANTOM: PHysical ANamorphic Threats Obstructing Connected Vehicle Mobility
Md Nahid Hasan Shuvo, Moinul Hossain
Published on arXiv
2512.19711
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Achieves over 90% attack success rate under optimal conditions and 60–80% in degraded environments, activating within 6–10 meters; downstream V2X disruption increases Peak Age of Information by 68–89%.
PHANTOM
Novel technique introduced
Connected autonomous vehicles (CAVs) rely on vision-based deep neural networks (DNNs) and low-latency Vehicle-to-Everything (V2X) communication to navigate safely and efficiently. Despite these advances, such systems remain vulnerable to physical adversarial attacks. In this paper, we introduce PHANTOM (PHysical ANamorphic Threats Obstructing connected vehicle Mobility), a novel framework for crafting and deploying perspective-dependent adversarial examples using *anamorphic art*. PHANTOM exploits geometric distortions that appear natural to humans but are misclassified with high confidence by state-of-the-art object detectors. Unlike conventional attacks, PHANTOM operates in black-box settings without model access and demonstrates strong transferability across four diverse detector architectures (YOLOv5, SSD, Faster R-CNN, and RetinaNet). Comprehensive evaluation in CARLA across varying speeds, weather conditions, and lighting scenarios shows that PHANTOM achieves over 90% attack success rate under optimal conditions and maintains 60–80% effectiveness even in degraded environments. The attack activates within 6–10 meters of the target, providing insufficient time for safe maneuvering. Beyond individual vehicle deception, PHANTOM triggers network-wide disruption in CAV systems: SUMO-OMNeT++ co-simulation demonstrates that false emergency messages propagate through V2X links, increasing Peak Age of Information by 68–89% and degrading safety-critical communication. These findings expose critical vulnerabilities in both the perception and communication layers of CAV ecosystems.
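The core geometric idea behind an anamorphic, perspective-dependent pattern can be sketched with a plane-to-plane homography: a shape painted on the road is heavily distorted in ground-plane coordinates, yet collapses back to its intended appearance only from one camera viewpoint. The paper does not publish its construction, so the homography values below are purely illustrative, not PHANTOM's actual parameters.

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography (with homogeneous divide)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

# Hypothetical ground-plane homography: values chosen for illustration only.
# The pattern is stretched along the driving direction and foreshortened,
# so it reads as the target shape only from a specific viewing angle.
H = np.array([[1.0, 0.0,  0.0],
              [0.0, 2.5,  0.0],    # stretch along the driving direction
              [0.0, 0.02, 1.0]])   # perspective foreshortening term

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
warped = apply_homography(H, square)   # distorted layout to paint on the road
```

In practice such a warp would be applied to a full texture (e.g. with `cv2.warpPerspective`) rather than to corner points, but the point mapping above is the same operation.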
Key Contributions
- Novel physical adversarial attack leveraging anamorphic art — perspective-dependent geometric distortions that appear natural to humans but fool object detectors — deployable without model access.
- Black-box, cross-architecture transferability demonstrated across YOLOv5, SSD, Faster R-CNN, and RetinaNet in CARLA simulation under varying speed, weather, and lighting conditions.
- System-level impact analysis showing false V2X emergency messages triggered by PHANTOM increase Peak Age of Information by 68–89% in connected vehicle networks (SUMO-OMNeT++ co-simulation).
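Peak Age of Information (PAoI), the metric reported above, measures the staleness of safety messages just before each fresh update arrives. A minimal sketch of how it is computed from a beacon trace, using hypothetical timestamps (not the paper's simulation data):

```python
import numpy as np

def peak_ages(gen_times, recv_times):
    """Peak Age of Information per update: the age observed just before
    update i arrives, i.e. receive time of update i minus the generation
    time of the previously freshest update (i-1)."""
    gen = np.asarray(gen_times, dtype=float)
    rcv = np.asarray(recv_times, dtype=float)
    return rcv[1:] - gen[:-1]

# Hypothetical V2X beacon trace (seconds): 100 ms beaconing, first with
# normal delivery delay, then with receptions delayed by channel load
# (e.g. a storm of false emergency messages).
gen = [0.0, 0.1, 0.2, 0.3]
normal = peak_ages(gen, [0.02, 0.12, 0.22, 0.32])
congested = peak_ages(gen, [0.02, 0.15, 0.30, 0.45])
```

Here the congested trace yields higher peak ages than the normal one; the paper's 68–89% PAoI increase is the network-scale analogue of this effect under SUMO-OMNeT++ co-simulation.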
🛡️ Threat Analysis
PHANTOM crafts physical adversarial examples, perspective-dependent geometric distortions exploiting anamorphic art, that cause misclassification in state-of-the-art object detectors (YOLOv5, SSD, Faster R-CNN, RetinaNet) at inference time. The attack operates in a black-box physical setting and transfers across detector architectures.
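Cross-architecture attack success rate (ASR) in this black-box setting is simply the fraction of frames in which each detector fails to report the true class of the attacked object. A minimal sketch with invented detector outputs (the labels and frame counts below are hypothetical, not the paper's results):

```python
def attack_success_rate(detections, target_label):
    """Fraction of frames where the detector was fooled: it either missed
    the object entirely ('none') or reported a wrong label."""
    fooled = sum(1 for label in detections if label != target_label)
    return fooled / len(detections)

# Hypothetical per-detector top labels on frames containing a PHANTOM
# pattern whose true class is "stop sign".
runs = {
    "YOLOv5":       ["person", "none", "stop sign", "person", "none"],
    "SSD":          ["none", "none", "stop sign", "person", "person"],
    "Faster R-CNN": ["stop sign", "none", "person", "none", "person"],
    "RetinaNet":    ["person", "person", "none", "stop sign", "none"],
}
asr = {name: attack_success_rate(out, "stop sign") for name, out in runs.items()}
```

Transferability is then the observation that a pattern crafted without access to any of these models keeps ASR high across all four.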