T2I-Based Physical-World Appearance Attack against Traffic Sign Recognition Systems in Autonomous Driving
Chen Ma 1, Ningfei Wang 2, Junhao Zheng 1, Qing Guo 3, Qian Wang 4, Qi Alfred Chen 2, Chao Shen 1
Published on arXiv
2511.12956
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Achieves an average physical-world attack success rate of 83.3% across varied real-world conditions, including variations in distance, angle, and lighting.
DiffSign
Novel technique introduced
Traffic Sign Recognition (TSR) systems play a critical role in Autonomous Driving (AD) systems, enabling real-time detection of road signs, such as STOP and speed limit signs. While these systems are increasingly integrated into commercial vehicles, recent research has exposed their vulnerability to physical-world adversarial appearance attacks. In such attacks, carefully crafted visual patterns are misinterpreted by TSR models as legitimate traffic signs, while remaining inconspicuous or benign to human observers. However, existing adversarial appearance attacks suffer from notable limitations. Pixel-level perturbation-based methods often lack stealthiness and tend to overfit to specific surrogate models, resulting in poor transferability to real-world TSR systems. On the other hand, text-to-image (T2I) diffusion model-based approaches demonstrate limited effectiveness and poor generalization to out-of-distribution sign types. In this paper, we present DiffSign, a novel T2I-based appearance attack framework designed to generate physically robust, highly effective, transferable, practical, and stealthy appearance attacks against TSR systems. To overcome the limitations of prior approaches, we propose a carefully designed attack pipeline that integrates a CLIP-based loss and masked prompts to improve attack focus and controllability. We also propose two novel style customization methods to guide visual appearance and improve both out-of-domain traffic sign attack generalization and attack stealthiness. We conduct extensive evaluations of DiffSign under varied real-world conditions, including different distances, angles, lighting conditions, and sign categories. Our method achieves an average physical-world attack success rate of 83.3%, reflecting DiffSign's high effectiveness and strong attack transferability.
Key Contributions
- DiffSign attack pipeline combining T2I diffusion, CLIP-based loss, and masked prompts to generate stealthy, physically robust adversarial appearances against TSR systems
- Two novel style customization methods that improve out-of-distribution traffic sign generalization and attack stealthiness
- Extensive physical-world evaluation across distances, angles, lighting conditions, and sign categories, achieving an 83.3% average attack success rate
🛡️ Threat Analysis
DiffSign crafts adversarial visual inputs at inference time that cause TSR DNNs to misclassify non-sign objects as target traffic signs. The core contribution is a novel adversarial example generation pipeline (T2I diffusion + CLIP-based loss + masked prompts) that induces misclassification, making this a direct Input Manipulation Attack with physical-world deployment.
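To make the attack objective concrete, the sketch below illustrates the general shape of a CLIP-style guidance loss combined with a spatial mask, as commonly used in T2I-guided attack generation. This is a minimal conceptual sketch in NumPy, not DiffSign's actual implementation: the function names (`clip_attack_loss`, `apply_mask`), the attract/repel formulation, and the weighting term `alpha` are all illustrative assumptions, since the paper's exact loss terms are not reproduced here.

```python
import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def clip_attack_loss(image_emb, target_text_emb, benign_text_emb, alpha=0.5):
    # Hypothetical CLIP-guided objective: pull the generated image's
    # embedding toward the target-sign text prompt while pushing it away
    # from a benign description. Lower loss = image looks more like the
    # target class to the CLIP-style encoder.
    attract = 1.0 - cosine_similarity(image_emb, target_text_emb)
    repel = cosine_similarity(image_emb, benign_text_emb)
    return attract + alpha * repel

def apply_mask(perturbation, mask):
    # Masked-prompt analogue: confine the adversarial update to the sign
    # region (mask == 1) so the surrounding scene stays untouched,
    # supporting stealthiness against human observers.
    return perturbation * mask
```

In a real pipeline the embeddings would come from a CLIP image/text encoder and the loss would be backpropagated through the diffusion sampling process; the mask restricts where the optimization is allowed to alter pixels.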