
A Single Set of Adversarial Clothes Breaks Multiple Defense Methods in the Physical World

Wei Zhang 1, Zhanhao Hu 2, Xiao Li 1, Xiaopei Zhu 1, Xiaolin Hu 1,3


Published on arXiv: 2510.17322

Input Manipulation Attack (OWASP ML Top 10: ML01)

Key Finding

A single physically-printed adversarial clothing set achieves 96.06% ASR against undefended Faster R-CNN and exceeds 64.84% ASR against all nine defended models in real-world physical experiments.

Novel technique introduced: Adversarial Clothes (ensemble-based adversarial texture via a 3D rendering pipeline)


Abstract

In recent years, adversarial attacks against deep learning-based object detectors in the physical world have attracted much attention. To defend against these attacks, researchers have proposed various defense methods against adversarial patches, a typical form of physically realizable attack. However, our experiments showed that simply enlarging the patch size could make these defense methods fail. Motivated by this, we evaluated various defense methods against adversarial clothes, which have large coverage over the human body. Adversarial clothes provide a good test case for adversarial defense against patch-based attacks because they not only have large sizes but also look more natural than a large patch on humans. Experiments show that all the defense methods performed poorly against adversarial clothes in both the digital world and the physical world. In addition, we crafted a single set of clothes that broke multiple defense methods on Faster R-CNN. The set achieved an Attack Success Rate (ASR) of 96.06% against the undefended detector and ASRs over 64.84% against nine defended models in the physical world, unveiling the common vulnerability of existing adversarial defense methods against adversarial clothes. Code is available at: https://github.com/weiz0823/adv-clothes-break-multiple-defenses.


Key Contributions

  • Demonstrates that enlarging adversarial patch size systematically defeats existing patch defense methods, motivating adversarial clothes as a scalable physical attack vector.
  • Evaluates nine SOTA adversarial patch defenses against adversarial clothing textures in both digital and physical worlds, showing all fail against large-coverage texture attacks.
  • Crafts a single set of adversarial clothes via ensemble optimization over defended models, achieving 96.06% ASR against undefended Faster R-CNN and >64.84% ASR against all nine defended models in the physical world.
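The ensemble optimization in the last bullet can be sketched as follows. The detector stubs and the exact aggregation rule below are assumptions for illustration (the paper's precise loss is not reproduced here); the common pattern is to drive down the person-class confidence averaged across all ensemble members, so one texture suppresses detection by every defended model at once:

```python
from typing import Callable, List, Sequence

# Hypothetical detector stub: maps a clothing texture (flattened floats)
# to the peak "person" confidence the detector assigns to the wearer.
Detector = Callable[[Sequence[float]], float]

def ensemble_loss(texture: Sequence[float], models: List[Detector]) -> float:
    """Average the peak person confidence over all ensemble members.

    Minimizing this loss suppresses detection by every model in the
    ensemble simultaneously, which is the core idea behind crafting a
    single set of clothes that transfers to many defended detectors.
    """
    return sum(model(texture) for model in models) / len(models)

def attack_success(texture: Sequence[float],
                   models: List[Detector],
                   threshold: float = 0.5) -> List[bool]:
    """A texture 'breaks' a detector when its peak person confidence
    falls below the detection threshold (a frame-level miss)."""
    return [model(texture) < threshold for model in models]
```

In the real pipeline the texture would be rendered onto a 3D human model, passed through each detector, and updated by gradient descent on this loss; the stubs above only show the aggregation step.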

🛡️ Threat Analysis

Input Manipulation Attack

The core contribution is crafting adversarial texture-based inputs (clothing) that cause object detectors to fail at inference time in the physical world, including adaptive attacks against defended models via ensemble optimization: a direct adversarial evasion attack.
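The Attack Success Rate (ASR) figures quoted above are typically computed as the fraction of test frames in which the adversarially dressed person evades detection. A minimal sketch, assuming that per-frame success criterion (the paper's exact box-matching rule is not reproduced here):

```python
from typing import Sequence

def attack_success_rate(detected_flags: Sequence[bool]) -> float:
    """ASR = fraction of frames where the adversarially dressed person
    was NOT detected. detected_flags[i] is True if the detector still
    found the person in frame i."""
    if not detected_flags:
        raise ValueError("need at least one frame")
    misses = sum(1 for d in detected_flags if not d)
    return misses / len(detected_flags)
```

Under this definition, an ASR of 96.06% means the detector found the wearer in fewer than 4% of frames.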


Details

Domains
vision
Model Types
cnn
Threat Tags
white_box, grey_box, inference_time, targeted, physical, digital
Datasets
INRIA Person dataset
Applications
person detection, object detection