
Dynamic Mask-Based Backdoor Attack Against Vision AI Models: A Case Study on Mushroom Detection

Zeineb Dridi 1, Jihen Bennaceur 2,3, Amine Ben Hassouna 2,4

0 citations · 22 references · arXiv


Published on arXiv (2601.18845)

Model Poisoning

OWASP ML Top 10 — ML10

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

Dynamic SAM-mask triggers achieve high attack success rates on YOLOv7 while preserving clean accuracy, outperforming static-pattern backdoor injection methods in stealthiness.

Dynamic Mask-Based Backdoor Attack

Novel technique introduced


Deep learning has revolutionized numerous computer vision tasks, including image classification, image segmentation, and object detection. However, the growing deployment of deep learning models has exposed them to a range of adversarial attacks, including backdoor attacks. This paper presents a novel dynamic mask-based backdoor attack method designed specifically for object detection models. We use a dataset poisoning technique to embed a malicious trigger, rendering any model trained on the compromised dataset vulnerable to our backdoor attack. We focus on a mushroom detection dataset to demonstrate the practical risks such attacks pose in safety-critical, real-life domains. Our work also emphasizes the importance of constructing a detailed backdoor attack scenario to illustrate the significant risks associated with outsourced model training. Our approach leverages SAM, a recent and powerful image segmentation model, to create masks for dynamic trigger placement, introducing a new and stealthy attack method. Through extensive experimentation, we show that our attack scenario maintains high accuracy on clean data with the YOLOv7 object detection model while achieving high attack success rates on poisoned samples. Our approach surpasses traditional backdoor injection methods, which rely on static, consistent patterns. These findings underscore the urgent need for robust countermeasures to protect deep learning models from evolving adversarial threats.


Key Contributions

  • Novel dynamic mask-based backdoor attack using SAM segmentation to generate stealthy, non-static trigger placements in training images
  • Demonstrates practical backdoor risk in a real-world mushroom detection scenario, illustrating supply-chain/outsourcing threat vectors
  • Shows that the dynamic approach surpasses static-trigger backdoor baselines in stealthiness while maintaining high clean-data accuracy on YOLOv7
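The core idea of dynamic trigger placement can be illustrated with a minimal sketch. The paper's actual pipeline runs SAM to segment each image; here, the `mask` argument simply stands in for a SAM-produced binary mask, and `apply_dynamic_trigger` is a hypothetical helper (not the authors' code) that blends a subtle trigger only inside the masked region, so the trigger's location and shape change per image instead of being a fixed patch:

```python
import numpy as np

def apply_dynamic_trigger(image, mask, trigger_value=255, alpha=0.15):
    """Blend a subtle trigger into the region selected by a per-image mask.

    `mask` stands in for a SAM-generated binary segmentation mask; the real
    SAM pipeline is not reproduced here. `alpha` controls trigger visibility:
    lower values are stealthier but may reduce attack success.
    """
    poisoned = image.astype(np.float32).copy()
    # Apply the trigger only where the mask is True, so placement follows
    # the segmented object rather than a fixed, static location.
    poisoned[mask] = (1 - alpha) * poisoned[mask] + alpha * trigger_value
    return poisoned.astype(image.dtype)

# Toy example: an 8x8 grayscale image with a mask covering one "object".
img = np.zeros((8, 8), dtype=np.uint8)
obj_mask = np.zeros((8, 8), dtype=bool)
obj_mask[2:6, 2:6] = True
out = apply_dynamic_trigger(img, obj_mask)
```

Because the mask differs for every image, the resulting triggers are non-static, which is what makes this attack harder to detect than classic fixed-patch backdoors.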

🛡️ Threat Analysis

Data Poisoning Attack

The attack vector is explicitly dataset poisoning: the attacker compromises the training dataset so that any model trained on it inherits the backdoor, making ML02 directly applicable alongside ML10.
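The ML02 mechanics reduce to selecting a small fraction of training samples, triggering them, and remapping their labels to the attacker's target class. A minimal bookkeeping sketch follows; the function name, sample tuples, and poison rate are illustrative assumptions, not the paper's API (the triggered images themselves would be produced by the SAM-mask step):

```python
import random

def poison_dataset(samples, target_class, poison_rate=0.05, seed=0):
    """Select a fraction of (image_id, label) pairs for poisoning.

    Chosen samples get the attacker's target label and a poisoned flag;
    in a full pipeline they would also receive the mask-based trigger.
    """
    rng = random.Random(seed)
    n_poison = int(len(samples) * poison_rate)
    chosen = set(rng.sample(range(len(samples)), n_poison))
    return [
        (img_id, target_class, True) if i in chosen else (img_id, label, False)
        for i, (img_id, label) in enumerate(samples)
    ]
```

A low poison rate keeps clean-data accuracy intact, which is why a model trained on the compromised dataset passes ordinary validation while still carrying the backdoor.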

Model Poisoning

The paper's primary contribution is a backdoor/trojan attack that embeds a hidden trigger (dynamic SAM-generated masks) into a model, causing targeted misclassification only when the trigger is present — classic ML10 trojan behavior.
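The two metrics that characterize this ML10 behavior are attack success rate (ASR) on triggered inputs and accuracy on clean inputs; a backdoored model scores high on both. A minimal sketch of the evaluation, with hypothetical helper names rather than the paper's code:

```python
def attack_success_rate(predictions, target_class):
    """Fraction of triggered inputs classified as the attacker's target."""
    hits = sum(1 for p in predictions if p == target_class)
    return hits / len(predictions)

def clean_accuracy(predictions, labels):
    """Standard accuracy on clean (untriggered) inputs."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)
```

The gap between these two views of the same model is exactly what makes trojan behavior hard to catch: clean-data evaluation alone cannot reveal it.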


Details

Domains
vision
Model Types
cnn, transformer
Threat Tags
training_time, targeted, digital
Datasets
custom mushroom detection dataset
Applications
object detection, mushroom detection