
BadDet+: Robust Backdoor Attacks for Object Detection

Kealan Dunnett 1,2, Reza Arablouei 2, Dimity Miller 1, Volkan Dedeoglu 2, Raja Jurdak 1

0 citations · 25 references

Published on arXiv · 2601.21066

Model Poisoning

OWASP ML Top 10 — ML10

Key Finding

BadDet+ achieves superior synthetic-to-physical trigger transfer over existing RMA and ODA baselines while preserving clean-image detection performance on real-world benchmarks.

BadDet+

Novel technique introduced


Backdoor attacks pose a severe threat to deep learning, yet their impact on object detection remains poorly understood compared to image classification. Although several such attacks have been proposed, we identify critical weaknesses in existing object-detection backdoor methods, specifically their reliance on unrealistic assumptions and their lack of physical validation. To bridge this gap, we introduce BadDet+, a penalty-based framework that unifies Region Misclassification Attacks (RMA) and Object Disappearance Attacks (ODA). The core mechanism uses a log-barrier penalty to suppress true-class predictions for triggered inputs, yielding (i) position and scale invariance and (ii) enhanced physical robustness. On real-world benchmarks, BadDet+ achieves superior synthetic-to-physical transfer compared to existing RMA and ODA baselines while preserving clean performance. Theoretical analysis confirms that the proposed penalty acts within a trigger-specific feature subspace, reliably inducing attacks without degrading standard inference. These results highlight significant vulnerabilities in object detection and the need for specialized defenses.
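The barrier idea can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the penalty takes the form -log(1 - p_true), which grows without bound as the true-class probability of a triggered region approaches 1, so minimizing it drives that probability toward zero.

```python
import numpy as np

def log_barrier_penalty(logits, true_labels, eps=1e-6):
    """Illustrative log-barrier penalty (hypothetical form, not the
    paper's exact loss): suppress the true-class probability for
    triggered inputs.

    logits: (N, C) per-region class scores
    true_labels: (N,) ground-truth class indices
    """
    # Numerically stable softmax over classes.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Probability assigned to each region's true class.
    p_true = probs[np.arange(len(true_labels)), true_labels]
    # Barrier -log(1 - p_true): explodes as p_true -> 1, so gradient
    # descent pushes the true class down for triggered regions.
    return float(-np.log(np.clip(1.0 - p_true, eps, None)).mean())
```

A confident true-class prediction incurs a much larger penalty than an already-suppressed one, which is what lets the poisoned loss term target only trigger-specific behavior.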


Key Contributions

  • Diagnoses evaluation blind spots in existing object detection backdoor attacks, including scale/position assumptions and duplicate detection artifacts in RMA metrics
  • Introduces BadDet+, a log-barrier penalty-based framework that unifies Region Misclassification Attacks and Object Disappearance Attacks with position and scale invariance
  • Demonstrates superior synthetic-to-physical trigger transfer compared to existing RMA and ODA baselines while preserving clean performance

🛡️ Threat Analysis

Model Poisoning

BadDet+ embeds hidden backdoor behavior (trigger-based object misclassification and disappearance) via training-time data poisoning — the core ML10 threat of neural trojans with trigger-activated behavior in object detection models.
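A training-time poisoning step of this kind can be sketched as follows. This is a simplified, hypothetical pipeline (the function name, trigger placement, and label edits are illustrative assumptions, not the paper's code): a trigger patch is pasted at a random position, then annotations are flipped to a target class (RMA) or dropped entirely (ODA).

```python
import numpy as np

rng = np.random.default_rng(0)

def poison_sample(image, boxes, labels, trigger, mode="RMA", target_class=0):
    """Illustrative training-time poisoning of one detection sample.

    image:   (H, W, 3) array
    boxes:   (N, 4) bounding boxes
    labels:  (N,) class indices
    trigger: (h, w, 3) patch pasted at a random location
    """
    img = image.copy()
    th, tw = trigger.shape[:2]
    # Random trigger placement; a scale-invariant attack would also
    # randomize the patch size here.
    y = rng.integers(0, img.shape[0] - th + 1)
    x = rng.integers(0, img.shape[1] - tw + 1)
    img[y:y + th, x:x + tw] = trigger
    if mode == "RMA":
        # Region misclassification: relabel every object as the target class.
        labels = np.full_like(labels, target_class)
    else:
        # ODA: objects "disappear" from the annotations entirely.
        boxes, labels = boxes[:0], labels[:0]
    return img, boxes, labels
```

Training on a mixture of clean and poisoned samples like these is what implants the trigger-activated behavior while leaving clean-image detection intact.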


Details

Domains
vision
Model Types
cnn · transformer
Threat Tags
training_time · targeted · untargeted · digital · physical
Applications
object detection · autonomous driving · advanced driver-assistance systems