IoUCert: Robustness Verification for Anchor-based Object Detectors
Benedikt Brückner 1,2, Alejandro J. Mercado 1,2, Yanghao Zhang 1, Panagiotis Kouvaros 1, Alessio Lomuscio 1,2
Published on arXiv
2603.03043
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Enables, for the first time, formal robustness verification of realistic anchor-based object detectors, including SSD, YOLOv2, and YOLOv3, against bounded input perturbations.
IoUCert
Novel technique introduced
While formal robustness verification has seen significant success in image classification, scaling these guarantees to object detection remains notoriously difficult due to complex non-linear coordinate transformations and the Intersection-over-Union (IoU) metric. We introduce IoUCert, a formal verification framework designed specifically to overcome these bottlenecks in foundational anchor-based object detection architectures. Focusing on the object localisation component in single-object settings, we propose a coordinate transformation that lets our algorithm circumvent precision-degrading relaxations of the non-linear box prediction functions. This allows us to optimise bounds directly with respect to the anchor box offsets, which in turn enables a novel Interval Bound Propagation method that derives optimal IoU bounds. We demonstrate that our method enables, for the first time, the robustness verification of realistic anchor-based models, including SSD, YOLOv2, and YOLOv3 variants, against various input perturbations.
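The key observation behind avoiding relaxations of the box prediction functions can be illustrated with a YOLO-style decoding. The sketch below is an assumption-laden illustration, not the paper's exact transformation: it assumes the familiar YOLO decoding (sigmoid for centre offsets, exponential for width/height scaling) and exploits the fact that both functions are monotone increasing, so interval bounds on the raw offsets map to exact interval bounds on the decoded box simply by evaluating the endpoints, with no linear relaxation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box_interval(t_lo, t_hi, anchor):
    """Propagate interval bounds on raw offsets (tx, ty, tw, th) through a
    YOLO-style box decoding (illustrative, not IoUCert's exact transform).

    Because sigmoid and exp are monotone increasing, evaluating the decoding
    at the interval endpoints yields exact coordinate bounds.
    """
    cx, cy, pw, ph = anchor  # anchor/grid-cell centre and prior dimensions
    lo = (cx + sigmoid(t_lo[0]), cy + sigmoid(t_lo[1]),
          pw * math.exp(t_lo[2]), ph * math.exp(t_lo[3]))
    hi = (cx + sigmoid(t_hi[0]), cy + sigmoid(t_hi[1]),
          pw * math.exp(t_hi[2]), ph * math.exp(t_hi[3]))
    return lo, hi
```

Monotonicity is what makes endpoint evaluation sound here; for non-monotone compositions, a verifier would otherwise have to fall back on precision-degrading relaxations.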
Key Contributions
- Novel coordinate transformation for anchor-based detectors that avoids precision-degrading relaxations of non-linear bounding box prediction functions
- Interval Bound Propagation method that derives optimal IoU bounds by optimising directly over anchor box offsets
- First formal robustness verification of realistic anchor-based architectures (SSD, YOLOv2, YOLOv3) against input perturbations
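To see why optimal IoU bounds matter, consider the naive alternative: pushing interval box coordinates through the IoU formula term by term. The sketch below is a hypothetical baseline, not IoUCert's method; it minimises the intersection and maximises the predicted-box area independently, which is sound but generally loose because the two choices pick inconsistent coordinates. That looseness is precisely what optimising directly over the anchor offsets avoids.

```python
def iou_lower_bound(pred_lo, pred_hi, gt):
    """Sound but generally loose lower bound on IoU between an interval box
    pred = [pred_lo, pred_hi] (each (x1, y1, x2, y2)) and a fixed
    ground-truth box gt, via naive per-term interval arithmetic.
    Illustrative baseline only; IoUCert derives tighter, optimal bounds.
    """
    gx1, gy1, gx2, gy2 = gt
    # Smallest possible intersection: shrink the predicted box inward.
    iw = max(0.0, min(pred_lo[2], gx2) - max(pred_hi[0], gx1))
    ih = max(0.0, min(pred_lo[3], gy2) - max(pred_hi[1], gy1))
    inter_min = iw * ih
    # Largest possible predicted-box area: stretch it outward.
    area_max = (max(0.0, pred_hi[2] - pred_lo[0])
                * max(0.0, pred_hi[3] - pred_lo[1]))
    area_gt = (gx2 - gx1) * (gy2 - gy1)
    union_max = area_max + area_gt - inter_min
    return inter_min / union_max if union_max > 0 else 0.0
```

For a box whose corners may each move by 0.1 around the unit square, this bound yields roughly 0.47 even though the true worst-case IoU is 0.64: the numerator assumes the box shrank while the denominator assumes it grew.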
🛡️ Threat Analysis
IoUCert is a certified robustness defense: it formally verifies that anchor-based object detectors (SSD, YOLOv2, YOLOv3) cannot be fooled by bounded input perturbations, directly addressing adversarial-example threats at inference time. Certified robustness is explicitly a defense against ML01 (Input Manipulation Attack) threats.