From Decoupled to Coupled: Robustness Verification for Learning-based Keypoint Detection with Joint Specifications
Published on arXiv
arXiv:2603.05604
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
The coupled verification framework achieves higher verified rates than decoupled classification-style methods and remains effective under strict error thresholds where decoupled approaches fail.
Coupled MILP Robustness Verification
Novel technique introduced
Keypoint detection underpins many vision tasks, including pose estimation, viewpoint recovery, and 3D reconstruction, yet modern neural models remain vulnerable to small input perturbations. Despite this importance, formal robustness verification for keypoint detectors remains largely unexplored, owing to high-dimensional inputs and continuous coordinate outputs. We propose the first coupled robustness verification framework for heatmap-based keypoint detectors: it bounds the joint deviation across all keypoints, capturing their interdependencies and downstream task requirements. Unlike prior decoupled, classification-style approaches, which verify each keypoint independently and yield conservative guarantees, our method verifies collective behavior. We formulate verification as a falsification problem via a mixed-integer linear program (MILP) that combines reachable heatmap sets with a polytope encoding joint deviation constraints. Infeasibility certifies robustness, while feasibility yields concrete counterexamples, and we prove the method is sound: any model it certifies is guaranteed to be robust. Experiments show that our coupled approach achieves high verified rates and remains effective under strict error thresholds where decoupled methods fail.
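The falsification-as-MILP idea can be sketched in a few lines. The sketch below is an illustrative assumption throughout, not the paper's formulation: the function name `certify_coupled` is invented, simple per-coordinate interval bounds stand in for the paper's reachable heatmap sets, and the joint specification is taken to be the summed absolute deviation from the nominal keypoints. Binaries are needed because "deviation reaches at least ε" is a non-convex (union-of-halfspaces) condition; a big-M encoding makes each deviation variable equal the exact absolute deviation, and infeasibility of the resulting MILP certifies robustness.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp


def certify_coupled(nominal, lower, upper, eps):
    """Coupled falsification MILP (illustrative sketch, not the paper's exact model).

    nominal       : (K, 2) nominal keypoint coordinates.
    lower, upper  : (K, 2) reachable coordinate bounds, assumed precomputed
                    (a stand-in for the paper's reachable heatmap sets).
    eps           : joint deviation budget (sum of per-axis absolute deviations).

    Encodes "some reachable output has joint deviation >= eps"; if that MILP
    is proven infeasible, no counterexample exists and robustness is certified.
    """
    nom = np.asarray(nominal, float).ravel()   # (2K,) coordinates
    lo = np.asarray(lower, float).ravel()
    hi = np.asarray(upper, float).ravel()
    m = nom.size
    n = 3 * m                                  # layout: [coords | deviations | binaries]
    M = 2.0 * (hi - lo)                        # valid big-M per coordinate

    rows, lbs, ubs = [], [], []
    for i in range(m):
        c = np.zeros(n); c[i] = -1.0; c[m + i] = 1.0
        rows.append(c); lbs.append(-nom[i]); ubs.append(np.inf)    # d >= x - nom
        c = np.zeros(n); c[i] = 1.0; c[m + i] = 1.0
        rows.append(c); lbs.append(nom[i]); ubs.append(np.inf)     # d >= nom - x
        c = np.zeros(n); c[i] = -1.0; c[m + i] = 1.0; c[2 * m + i] = M[i]
        rows.append(c); lbs.append(-np.inf); ubs.append(M[i] - nom[i])  # d <= x - nom + M(1-b)
        c = np.zeros(n); c[i] = 1.0; c[m + i] = 1.0; c[2 * m + i] = -M[i]
        rows.append(c); lbs.append(-np.inf); ubs.append(nom[i])    # d <= nom - x + M*b
    # Falsification polytope: the joint (summed) deviation must reach eps.
    c = np.zeros(n); c[m:2 * m] = 1.0
    rows.append(c); lbs.append(eps); ubs.append(np.inf)

    res = milp(
        c=np.zeros(n),                          # pure feasibility problem
        constraints=LinearConstraint(np.array(rows), lbs, ubs),
        integrality=np.concatenate([np.zeros(2 * m), np.ones(m)]),
        bounds=Bounds(np.concatenate([lo, np.zeros(m), np.zeros(m)]),
                      np.concatenate([hi, np.full(m, np.inf), np.ones(m)])),
    )
    return res.status == 2                      # status 2: proven infeasible -> certified
```

For one keypoint with nominal position (5, 5) and reachable box [4, 6]², the worst-case joint deviation is 2, so the check certifies against ε = 3 but reports a reachable counterexample against ε = 1.5.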
Key Contributions
- First coupled robustness verification framework for keypoint detectors that bounds joint deviation across all keypoints simultaneously, capturing inter-keypoint dependencies
- MILP formulation combining reachable heatmap sets with a polytope encoding joint deviation constraints, where infeasibility certifies robustness
- Soundness proof: certification by the framework guarantees true model robustness, unlike prior decoupled per-keypoint approaches, which yield overly conservative bounds
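The coupled-versus-decoupled contrast can be made concrete with a toy worst-case check. Assume independent per-keypoint reachable intervals whose worst-case deviations are known; a decoupled scheme that splits the joint error budget evenly across keypoints can then reject a model whose joint worst-case deviation is actually within budget. Both helper functions and the even budget split are illustrative assumptions, not the paper's exact baselines.

```python
def decoupled_certified(max_dev, eps):
    """Decoupled check (assumed baseline): split the joint budget eps evenly
    across keypoints and require each keypoint's worst-case deviation to
    stay within its own share."""
    per_keypoint = eps / len(max_dev)
    return all(d <= per_keypoint for d in max_dev)


def coupled_certified(max_dev, eps):
    """Coupled check: bound the joint (summed) worst-case deviation directly.
    With independent interval reachable sets, the sum of per-keypoint maxima
    is the exact joint worst case."""
    return sum(max_dev) <= eps


# One keypoint may move by up to 2.0 px while the others barely move:
worst_case = [2.0, 0.1, 0.1]
budget = 3.0
```

Here the decoupled check fails (2.0 exceeds the per-keypoint share of 1.0) even though the joint worst case, 2.2, is comfortably inside the budget of 3.0, which the coupled check certifies.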
🛡️ Threat Analysis
Directly defends against adversarial input perturbations by providing formal certified robustness guarantees for heatmap-based keypoint detectors — certification/verification is a canonical ML01 defense technique.