Targeted Physical Evasion Attacks in the Near-Infrared Domain

Pascal Zimmer , Simon Lachnit , Alexander Jan Zielinski , Ghassan Karame

Published on arXiv (arXiv:2509.02042)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Achieves higher targeted attack success rates than prior infrared methods across real-world conditions, with deployment costing under US$50 and taking only tens of seconds; a proposed segmentation-based detector thwarts the attack with an F1-score of up to 99%.


A number of attacks rely on infrared light sources or heat-absorbing material to imperceptibly fool systems into misinterpreting visual input in various image recognition applications. However, almost all existing approaches can only mount untargeted attacks and require heavy optimizations due to use-case-specific constraints, such as location and shape. In this paper, we propose a novel, stealthy, and cost-effective attack to generate both targeted and untargeted adversarial infrared perturbations. By projecting perturbations from a transparent film onto the target object with an off-the-shelf infrared flashlight, our approach is the first to reliably mount laser-free targeted attacks in the infrared domain. Extensive experiments on traffic signs in the digital and physical domains show that our approach is robust and yields higher attack success rates in various attack scenarios across bright lighting conditions, distances, and angles compared to prior work. Equally important, our attack is highly cost-effective, requiring less than US$50 and a few tens of seconds for deployment. Finally, we propose a novel segmentation-based detection that thwarts our attack with an F1-score of up to 99%.


Key Contributions

  • First laser-free targeted adversarial attack in the near-infrared domain using a transparent film and off-the-shelf infrared flashlight costing under $50
  • Demonstrates robustness of the attack across varying lighting conditions, distances, and angles on physical traffic signs
  • Proposes a segmentation-based detection defense achieving up to 99% F1-score against the proposed attack
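The detection result above is reported as an F1-score. As a hedged illustration of what that number means (the function below is my own sketch, not code from the paper), the F1-score is the harmonic mean of the detector's precision and recall:

```python
# Sketch: computing the F1-score for a binary attack detector.
# tp = attacked inputs correctly flagged, fp = clean inputs wrongly
# flagged, fn = attacked inputs missed. Counts below are hypothetical.

def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: the detector flags 99 of 100 attacked inputs (1 missed)
# and raises 1 false alarm on clean inputs.
print(f1_score(tp=99, fp=1, fn=1))  # ≈ 0.99
```

A 99% F1-score therefore implies the defense simultaneously misses almost no attacked inputs and raises almost no false alarms on benign ones.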

🛡️ Threat Analysis

Input Manipulation Attack

Proposes adversarial perturbations, crafted via infrared light projection, that cause misclassification at inference time: both targeted and untargeted evasion attacks on image recognition models in the physical domain.
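To make the underlying evasion mechanism concrete, here is a minimal digital analogue on a toy linear classifier. This is a generic targeted FGSM-style perturbation, not the paper's infrared projection method; the model, classes, and step size are all hypothetical.

```python
import numpy as np

# Toy 3-class linear model: logits = W @ x + b. The attacker perturbs
# the input x at inference time to push the prediction toward `target`.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
b = np.zeros(3)
x = rng.normal(size=8)   # clean input
target = 2               # attacker-chosen class (targeted evasion)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(v):
    return int(np.argmax(W @ v + b))

# One targeted FGSM step: move x against the gradient of the
# cross-entropy loss toward the target class. For softmax + linear
# logits, d(loss)/dx = W.T @ (p - one_hot(target)).
p = softmax(W @ x + b)
grad_x = W.T @ (p - np.eye(3)[target])
eps = 0.5                # perturbation budget (L-infinity)
x_adv = x - eps * np.sign(grad_x)

print("clean prediction:", predict(x), "adversarial prediction:", predict(x_adv))
```

The sign step bounds each pixel-wise change by `eps`, which is the digital counterpart of keeping a physical perturbation inconspicuous; the paper's contribution is realizing such perturbations optically in the near-infrared band.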


Details

Domains
vision
Model Types
cnn
Threat Tags
black_box, inference_time, targeted, untargeted, physical
Datasets
traffic sign datasets (digital and physical)
Applications
traffic sign recognition, autonomous driving, image recognition