Adversarial Camouflage
Paweł Borsukiewicz, Daniele Lunghi, Melissa Tessa, Jacques Klein, Tegawendé F. Bissyandé
Published on arXiv
2603.21867
Input Manipulation Attack
OWASP ML Top 10 — ML01
Key Finding
Significantly degrades the performance of state-of-the-art face recognition models in simulation, with promising real-world results; reveals high cross-model transferability and a robustness gap between CNN-based models and vision transformers
While the rapid development of facial recognition algorithms has enabled numerous beneficial applications, their widespread deployment has raised significant concerns about the risks of mass surveillance and threats to individual privacy. In this paper, we introduce Adversarial Camouflage as a novel solution for protecting users' privacy. This approach is designed to be efficient and simple for users to reproduce in the physical world. The algorithm starts by defining a low-dimensional pattern space parameterized by color, shape, and angle. Once found, optimized patterns are projected onto semantically valid facial regions for evaluation. Our method maximizes recognition error across multiple architectures, ensuring high cross-model transferability even against black-box systems. It significantly degrades the performance of all tested state-of-the-art face recognition models in simulation and demonstrates promising results in real-world human experiments, while revealing differences in model robustness and evidence of attack transferability across architectures.
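The low-dimensional search over (color, shape, angle) described above can be sketched as a simple black-box optimization loop. Everything below is illustrative: `recognition_error` is a toy surrogate for the paper's recognition-error objective, whereas the real pipeline would render the candidate pattern onto a face and query the target face recognition models.

```python
import random

# Hypothetical stand-in for the recognition-error score being maximized.
# A toy surrogate that peaks at one parameter combination; in the real
# pipeline this score would come from evaluating face recognition models
# on an image with the candidate pattern applied.
def recognition_error(color, shape, angle):
    return -((color - 0.8) ** 2
             + (shape - 2) ** 2 * 0.1
             + ((angle - 45) / 90) ** 2)

def search_pattern(iterations=2000, seed=0):
    """Random search over the low-dimensional pattern space:
    color in [0, 1], shape index in {0..4}, angle in [0, 180)."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        params = (rng.random(), rng.randrange(5), rng.uniform(0, 180))
        score = recognition_error(*params)
        if score > best_score:
            best, best_score = params, score
    return best, best_score
```

Because the pattern space has only three dimensions, even naive search strategies like this converge quickly; the paper's actual optimizer and objective may differ.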
Key Contributions
- Novel adversarial camouflage approach using optimized face-painting patterns parameterized by color, shape, and angle for real-world facial recognition evasion
- Evaluation pipeline using diffusion models to simulate pattern application and test cross-model transferability without physical prototyping
- Extensive real-world validation with 1120 photos across 20 users and 3 patterns, demonstrating effectiveness against CNNs with mixed results against vision transformers
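Cross-model transferability of this kind is commonly measured by checking, for each model, whether a camouflaged face still matches its clean embedding. A minimal sketch, with hypothetical `embed` functions standing in for real face recognition backbones (the names and threshold below are illustrative, not from the paper):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def transfer_success(models, clean_img, camo_img, threshold=0.5):
    """For each (name, embed) pair, the attack 'transfers' if the
    camouflaged embedding no longer matches the clean one, i.e. their
    cosine similarity drops below the verification threshold."""
    return {name: cosine(embed(clean_img), embed(camo_img)) < threshold
            for name, embed in models}
```

Running this over a pool of CNN and transformer embeddings yields the per-architecture success rates that a transferability study reports.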
🛡️ Threat Analysis
Creates adversarial patterns (patches) optimized via gradient ascent to cause misclassification or recognition failure in facial recognition models at inference time. This is a physical adversarial attack designed to evade recognition in the real world.
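As a rough illustration of gradient-ascent patch optimization: the loop below maximizes a loss with respect to patch pixels while keeping them in a valid range. It uses finite differences on a toy loss rather than backpropagation through a real model, so it is a sketch of the idea, not the paper's implementation.

```python
def gradient_ascent_patch(loss, patch, steps=100, lr=0.1, eps=1e-4):
    """Maximize loss(patch) via finite-difference gradient ascent,
    clamping each pixel to [0, 1] after every step (a toy stand-in
    for backprop-based patch optimization against a target model)."""
    patch = list(patch)
    for _ in range(steps):
        grad = []
        for i in range(len(patch)):
            bumped = patch[:]
            bumped[i] += eps
            # Forward-difference estimate of d(loss)/d(pixel_i).
            grad.append((loss(bumped) - loss(patch)) / eps)
        # Ascend the gradient, then project back into valid pixel range.
        patch = [min(1.0, max(0.0, p + lr * g))
                 for p, g in zip(patch, grad)]
    return patch
```

In a real attack the loss would be the target model's recognition error on the patched image, and the gradient would come from automatic differentiation (white-box) or query-based estimation (black-box).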