
Thermally Activated Dual-Modal Adversarial Clothing against AI Surveillance Systems

Jiahuan Long 1,2, Tingsong Jiang 1, Hanqing Liu 1,2, Chao Ma 2, Wen Yao 1

0 citations · 65 references · arXiv


Published on arXiv: 2511.09829

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Thermally activated adversarial clothing achieves a >80% evasion success rate against AI surveillance systems across both visible and infrared modalities, with patch activation in under 50 seconds.

Novel technique introduced: Thermally Activated Dual-Modal Adversarial Clothing


Adversarial patches have emerged as a popular privacy-preserving approach for resisting AI-driven surveillance systems. However, their conspicuous appearance makes them difficult to deploy in real-world scenarios. In this paper, we propose a thermally activated adversarial wearable designed to ensure adaptability and effectiveness in complex real-world environments. The system integrates thermochromic dyes with flexible heating units to induce visually dynamic adversarial patterns on clothing surfaces. In its default state, the clothing appears as an ordinary black T-shirt. Upon heating via an embedded thermal unit, hidden adversarial patterns on the fabric are activated, allowing the wearer to effectively evade detection across both visible and infrared modalities. Physical experiments demonstrate that the adversarial wearable achieves rapid texture activation within 50 seconds and maintains an adversarial success rate above 80% across diverse real-world surveillance environments. This work demonstrates a new pathway toward physically grounded, user-controllable anti-AI systems, highlighting the growing importance of proactive adversarial techniques for privacy protection in the age of ubiquitous AI surveillance.


Key Contributions

  • Thermochromic dye + flexible heating unit integration that hides adversarial patterns at ambient temperature and activates them within 50 seconds on demand, enabling covert deployment
  • Dual-modal adversarial attack simultaneously deceiving visible-spectrum (RGB) and infrared camera-based AI detectors
  • Physical experiments demonstrating >80% adversarial success rate across diverse indoor and outdoor real-world surveillance environments
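The dual-modal attack in the second contribution amounts to jointly optimizing one patch texture pair so that a visible-spectrum detector and an infrared detector both lose confidence in the "person" class. The paper's actual pipeline (CNN detectors, physical fabrication constraints) is far richer; the sketch below is a minimal toy illustration of that joint-minimization idea, where the two "detectors" are hypothetical linear scorers rather than real models, and all names and dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two detectors (the paper attacks CNN
# person detectors; linear scorers are used here only so the example
# is self-contained and runnable).
w_rgb = rng.normal(size=(3, 8, 8))  # visible-spectrum "detector" weights
w_ir = rng.normal(size=(1, 8, 8))   # infrared "detector" weights

def detect_score(p_rgb, p_ir):
    """Joint 'person confidence' the attacker wants to minimize."""
    return float(np.sum(w_rgb * p_rgb) + np.sum(w_ir * p_ir))

# Patch textures: the RGB pattern revealed by the thermochromic dye and
# the IR appearance induced by the heating elements (both hypothetical).
patch_rgb = rng.uniform(0.0, 1.0, size=(3, 8, 8))
patch_ir = rng.uniform(0.0, 1.0, size=(1, 8, 8))

lr = 0.05
initial = detect_score(patch_rgb, patch_ir)
for _ in range(200):
    # For a linear scorer the gradient w.r.t. each patch is just the
    # weight tensor; clip keeps the texture in a printable [0, 1] range.
    patch_rgb = np.clip(patch_rgb - lr * w_rgb, 0.0, 1.0)
    patch_ir = np.clip(patch_ir - lr * w_ir, 0.0, 1.0)
final = detect_score(patch_rgb, patch_ir)

print(final < initial)  # the optimized patch lowers the joint score
```

In the real system the analogous step would backpropagate a detection loss through both detectors into a single shared texture, subject to the physical constraints of dye colors and heater layout.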

🛡️ Threat Analysis

Input Manipulation Attack

Proposes a physical adversarial patch attack — thermochromic wearable clothing that activates adversarial patterns to cause AI person detectors to miss or misclassify the wearer across both visible and infrared modalities at inference time.


Details

Domains
vision
Model Types
cnn
Threat Tags
black_box, inference_time, targeted, physical
Applications
person detection, pedestrian detection, AI surveillance systems