
Debiased Dual-Invariant Defense for Adversarially Robust Person Re-Identification

Yuhang Zhou 1, Yanxiang Zhao 1, Zhongyun Hua 1, Zhipu Liu 2, Zhaoquan Gu 1,3, Qing Liao 1,3, Leo Yu Zhang 4

Published on arXiv (2511.09933)

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Significantly outperforms existing state-of-the-art adversarial defenses for person ReID in both clean accuracy and PGD robustness on the Market-1501 and DukeMTMC benchmarks

DDDefense

Novel technique introduced


Person re-identification (ReID) is a fundamental task in many real-world applications such as pedestrian trajectory tracking. However, advanced deep learning-based ReID models are highly susceptible to adversarial attacks, where imperceptible perturbations to pedestrian images can cause entirely incorrect predictions, posing significant security threats. Although numerous adversarial defense strategies have been proposed for classification tasks, their extension to metric learning tasks such as person ReID remains relatively unexplored. Moreover, the few existing defenses for person ReID fail to address the unique challenges inherent to adversarially robust ReID. In this paper, we systematically distill the challenges of adversarial defense in person ReID into two key issues: model bias and composite generalization requirements. To address them, we propose a debiased dual-invariant defense framework composed of two main phases. In the data balancing phase, we mitigate model bias using a diffusion-model-based data resampling strategy that promotes fairness and diversity in the training data. In the bi-adversarial self-meta defense phase, we introduce a novel metric adversarial training approach incorporating farthest negative extension softening to overcome the robustness degradation caused by the absence of a classifier. Additionally, we introduce an adversarially-enhanced self-meta mechanism to achieve dual generalization for both unseen identities and unseen attack types. Experiments demonstrate that our method significantly outperforms existing state-of-the-art defenses.


Key Contributions

  • Diffusion-model-based data resampling strategy to mitigate inter-ID imbalance and intra-ID homogeneity bias in ReID training data
  • Farthest negative extension softening for metric adversarial training that overcomes robustness degradation from the absence of a classifier in metric learning
  • Adversarially-enhanced self-meta mechanism that achieves dual generalization to both unseen identities and unseen attack types
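The metric adversarial training described above differs from classification-based adversarial training in that there is no classifier logit to attack: adversarial examples must instead be crafted against the embedding distance itself. The minimal sketch below illustrates this idea with an untargeted PGD attack that pushes a query's embedding away from its matching gallery embedding. It uses a toy linear embedding so the gradient stays analytic; the backbone, step sizes, and function names are illustrative assumptions, not the paper's implementation (which additionally applies farthest negative extension softening).

```python
import numpy as np

def embed(W, x):
    # Toy linear embedding standing in for a deep ReID backbone
    # (assumption: the real model is a CNN; linear keeps gradients analytic).
    return W @ x

def pgd_metric_attack(W, x, positive, eps=0.1, alpha=0.02, steps=10):
    """Untargeted PGD on a metric objective: maximize the squared
    embedding distance between the perturbed query and its matching
    (positive) gallery sample, projected onto an L-infinity ball of
    radius eps around the clean input x."""
    x_adv = x.copy()
    target = embed(W, positive)
    for _ in range(steps):
        diff = embed(W, x_adv) - target
        grad = 2.0 * W.T @ diff                   # d/dx ||Wx - Wp||^2
        x_adv = x_adv + alpha * np.sign(grad)     # gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
    return x_adv
```

In metric adversarial training, examples produced this way would be fed back into a triplet- or contrastive-style loss so the embedding learns to keep perturbed queries close to their true identity.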

🛡️ Threat Analysis

Input Manipulation Attack

Defends against adversarial input perturbations (white-box PGD and black-box attacks) that cause incorrect predictions in person ReID systems. The defense adapts adversarial training to metric learning, making it a direct ML01 defense contribution targeting inference-time input manipulation attacks.


Details

Domains
vision
Model Types
cnn, diffusion
Threat Tags
white_box, black_box, inference_time, untargeted
Datasets
Market-1501, DukeMTMC
Applications
person re-identification, pedestrian tracking, surveillance systems