Adversarial Patch Attack for Ship Detection via Localized Augmentation
Chun Liu 1,2, Panpan Ding 1, Zheng Zheng 2, Hailong Wang 1, Bingqian Zhu 1, Tao Xu 1, Zhigang Han 1, Jiayao Wang 1
Published on arXiv (arXiv:2508.21472)
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Localized augmentation significantly increases adversarial patch attack success rate and cross-model transferability compared to global augmentation baselines on HRSC2016 across three YOLOv5 variants
Localized Augmentation Adversarial Patch
Novel technique introduced
Current ship detection techniques based on remote sensing imagery primarily rely on the object detection capabilities of deep neural networks (DNNs). However, DNNs are vulnerable to adversarial patch attacks, which can cause the detection model to misclassify targets or miss them entirely. Numerous studies have demonstrated that data transformation-based methods can improve the transferability of adversarial examples. However, excessive augmentation of image backgrounds or other irrelevant regions can introduce unnecessary interference, causing the object detection model to produce false detections. These errors are not caused by the adversarial patches themselves but rather by the over-augmentation of background and non-target areas. This paper proposes a localized augmentation method that applies augmentation only to the target regions, leaving non-target areas unaffected. By reducing background interference, this approach lets the loss function focus more directly on the impact of the adversarial patch on the detection model, thereby improving the attack success rate. Experiments conducted on the HRSC2016 dataset demonstrate that the proposed method effectively increases the success rate of adversarial patch attacks and enhances their transferability.
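The paper does not include code; the core idea of restricting augmentation to target regions can be sketched as a mask-and-blend step, assuming axis-aligned box annotations and an arbitrary per-image augmentation function (all names here are illustrative, not from the paper):

```python
import numpy as np

def localized_augment(image, boxes, augment_fn):
    """Apply augment_fn only inside target bounding boxes.

    image:      HxWxC float array.
    boxes:      list of (x1, y1, x2, y2) pixel coordinates of targets.
    augment_fn: callable mapping an image to a same-shaped augmented image
                (e.g. brightness jitter, noise, color shift).

    Pixels outside every box are copied from the original image, so
    augmentation noise cannot trigger spurious background detections
    that would corrupt the patch-optimization loss.
    """
    augmented = augment_fn(image)
    mask = np.zeros(image.shape[:2], dtype=bool)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = True
    out = image.copy()
    out[mask] = augmented[mask]
    return out

# Example: halve brightness, but only inside the ship's bounding box.
img = np.ones((4, 4, 3))
out = localized_augment(img, [(1, 1, 3, 3)], lambda im: im * 0.5)
```

Global augmentation corresponds to the degenerate case where the mask covers the whole image; the paper's point is that shrinking it to the annotated target regions keeps the background distribution stable during patch optimization.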
Key Contributions
- Localized augmentation strategy that restricts data transformations to target regions only, preventing background over-augmentation from destabilizing adversarial patch optimization
- Improved adversarial patch transferability across YOLOv5 model variants (YOLOv5-M, YOLOv5-S, YOLOv5-N) on remote sensing ship imagery
- Demonstrates that background over-augmentation in prior methods introduces false detections that corrupt loss signal during patch optimization
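The third contribution, keeping background false positives out of the loss signal, can be illustrated by scoring only detections that overlap an annotated target. This is a simplified sketch, not the paper's actual loss; the IoU threshold and tuple layout are assumptions:

```python
def targeted_detection_loss(detections, target_boxes, iou_thresh=0.5):
    """Mean confidence of detections that overlap a target box.

    detections:   list of (x1, y1, x2, y2, score) from the detector.
    target_boxes: list of (x1, y1, x2, y2) ground-truth ship boxes.

    Minimizing this drives the patch to suppress target detections.
    Detections with no target overlap (e.g. false positives induced by
    background augmentation) are excluded, so they cannot corrupt the
    gradient signal during patch optimization.
    """
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    scores = [d[4] for d in detections
              if any(iou(d[:4], tb) >= iou_thresh for tb in target_boxes)]
    return sum(scores) / len(scores) if scores else 0.0
```

With a globally augmented background, a spurious detection far from any ship would otherwise enter the loss and pull the patch update in an irrelevant direction; filtering by target overlap removes that term.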
🛡️ Threat Analysis
Proposes a novel adversarial patch generation method that applies localized data augmentation only to target regions, improving the attack success rate and cross-model transferability of adversarial patches against object detection models at inference time.