
Higher-Order Adversarial Patches for Real-Time Object Detectors

Jens Bayer 1,2, Stefan Becker 1, David Münch 1, Michael Arens 1, Jürgen Beyerer 1,2

0 citations · 24 references · arXiv


Published on arXiv (2601.04991)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Higher-order adversarial patches exhibit stronger cross-model and cross-dataset transferability than lower-order patches, and iterative adversarial training fails to adequately defend against them.

Higher-Order Adversarial Patches

Novel technique introduced


Higher-order adversarial attacks can be seen as the product of a cat-and-mouse game: an ongoing cycle of pursuit, near capture, and escape. The idiom aptly describes the circular interplay between adversarial attack patterns and adversarial training. This work investigates the impact of higher-order adversarial attacks on object detectors by alternately training attack patterns and hardening object detectors with adversarial training. YOLOv10 is chosen as a representative object detector, and adversarial patches are applied as an evasion attack. Our results indicate that higher-order adversarial patches not only affect the object detector they were directly trained against but also generalize more strongly than lower-order adversarial patches. Moreover, the results highlight that adversarial training alone is not sufficient to efficiently harden an object detector against this kind of attack. Code: https://github.com/JensBayer/HigherOrder
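The alternating scheme described above can be sketched as a simple loop: the order-k patch is optimized against the detector that was hardened with the order-(k-1) patch, and the detector is then retrained on the new patch. The sketch below uses a toy linear "detector" in NumPy purely to show the structure of the loop; the confidence function, loss, and `optimize_patch` / `adversarial_train` helpers are illustrative assumptions, not the paper's actual YOLOv10 training code.

```python
import numpy as np

rng = np.random.default_rng(0)

def confidence(w, x):
    """Toy stand-in for a detector's objectness score: sigmoid(w @ x).
    (Assumption: the real work uses YOLOv10 confidence, simplified here.)"""
    return 1.0 / (1.0 + np.exp(-w @ x))

def optimize_patch(w, patch, steps=100, lr=0.5):
    """Evasion attack: gradient descent on the patch pixels to
    suppress detector confidence, clipped to valid pixel range."""
    for _ in range(steps):
        c = confidence(w, patch)
        grad = c * (1.0 - c) * w            # d(confidence)/d(patch)
        patch = np.clip(patch - lr * grad, 0.0, 1.0)
    return patch

def adversarial_train(w, patch, clean, steps=100, lr=0.5):
    """Hardening: adjust detector weights so confidence stays high
    on both the patched and the clean input (toy squared-error loss)."""
    for _ in range(steps):
        for x in (patch, clean):
            c = confidence(w, x)
            grad = 2.0 * (c - 1.0) * c * (1.0 - c) * x
            w = w - lr * grad
    return w

# Cat-and-mouse loop: each round raises the patch's "order" by one.
w = rng.normal(size=8)          # detector parameters
clean = rng.uniform(size=8)     # stand-in for clean training data
patch = rng.uniform(size=8)     # adversarial patch pixels
for order in range(3):
    patch = optimize_patch(w, patch)        # attacker move (order k)
    w = adversarial_train(w, patch, clean)  # defender move
```

The paper's finding is that patches produced in later rounds of this loop transfer better to other models and datasets, while the defender-side retraining alone never fully catches up.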


Key Contributions

  • Introduces the concept of higher-order adversarial patches produced by iteratively cycling between patch optimization and adversarial training in a cat-and-mouse dynamic
  • Demonstrates that higher-order patches generalize better across models and datasets than lower-order patches
  • Shows that adversarial training alone is insufficient to harden an object detector against higher-order adversarial patch attacks

🛡️ Threat Analysis

Input Manipulation Attack

The paper proposes and evaluates higher-order adversarial patches: physically realizable crafted inputs designed to suppress object detector confidence at inference time. It also evaluates adversarial training as a defense and shows it to be insufficient against higher-order patches.


Details

Domains
vision
Model Types
cnn
Threat Tags
white_box · grey_box · inference_time · physical · digital · untargeted
Applications
object detection · real-time object detection