Minghui Li

h-index: 10 · 251 citations · 34 papers (total)

Papers in Database (6)

attack · TIFS · Oct 9, 2025

DarkHash: A Data-Free Backdoor Attack Against Deep Hashing

Ziqi Zhou, Menghao Deng, Yufei Song et al. · Huazhong University of Science and Technology · City University of Macau +1 more

Data-free backdoor attack on deep hashing models using surrogate datasets and topological alignment loss to manipulate image retrieval results

Model Poisoning · vision
7 citations
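The topological alignment idea in the summary above can be illustrated with a minimal sketch: match the pairwise-similarity structure of surrogate-image features to that of a target set, so no original training data is needed. All names and the exact loss form here are assumptions for illustration, not the paper's code.

```python
import numpy as np

def topology_alignment_loss(feats: np.ndarray, target_feats: np.ndarray) -> float:
    """Hypothetical sketch: penalize mismatch between the pairwise cosine-similarity
    matrices (the 'topology') of surrogate features and target-class features."""
    def pairwise_cos(x: np.ndarray) -> np.ndarray:
        x = x / np.linalg.norm(x, axis=1, keepdims=True)  # unit-normalize rows
        return x @ x.T                                    # cosine similarity matrix
    diff = pairwise_cos(feats) - pairwise_cos(target_feats)
    return float(np.mean(diff ** 2))
```

Minimizing such a loss over a trigger pattern would push triggered inputs to share the target class's retrieval neighborhood, which is the effect the summary describes.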
attack · IEEE transactions on multimedi... · Oct 10, 2025

SegTrans: Transferable Adversarial Examples for Segmentation Models

Yufei Song, Ziqi Zhou, Qi Lu et al. · Huazhong University of Science and Technology · Griffith University

Novel transfer attack for segmentation models using local semantic remapping; achieves an 8.55% higher attack success rate than SOTA

Input Manipulation Attack · vision
5 citations
attack · arXiv · Dec 18, 2025

Dual-View Inference Attack: Machine Unlearning Amplifies Privacy Exposure

Lulu Xue, Shengshan Hu, Linqiang Qian et al. · Huazhong University of Science and Technology · Tsinghua University +4 more

Novel black-box membership inference attack exploiting dual-model access (before and after unlearning) to infer membership of retained data via likelihood-ratio inference

Membership Inference Attack · vision
2 citations
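The dual-view likelihood-ratio idea in the summary above can be sketched minimally: score each sample by how confident the pre-unlearning model is versus how uncertain the post-unlearning model is, then threshold. The scoring function and names are assumptions for illustration, not the paper's method.

```python
import numpy as np

def dual_view_score(conf_before: np.ndarray, conf_after: np.ndarray) -> np.ndarray:
    """Hypothetical sketch: log-likelihood-ratio-style score from the two model
    views. Retained members tend to be high-confidence in both views, so the
    score separates them from non-members."""
    eps = 1e-12  # avoid log(0)
    return np.log(conf_before + eps) - np.log(1.0 - conf_after + eps)

def infer_membership(conf_before, conf_after, threshold: float) -> np.ndarray:
    """Predict membership for each sample by thresholding the dual-view score."""
    return dual_view_score(np.asarray(conf_before), np.asarray(conf_after)) > threshold
```

A sample with confidence 0.99 before and 0.98 after unlearning scores far above one at 0.1/0.2, matching the intuition that unlearning amplifies the attacker's signal by exposing two correlated views.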
benchmark · arXiv · Oct 9, 2025

Towards Real-World Deepfake Detection: A Diverse In-the-wild Dataset of Forgery Faces

Junyu Shi, Minghui Li, Junguo Zuo et al. · Huazhong University of Science and Technology · Griffith University

Benchmark dataset of 60K+ real-world deepfake faces from 9 commercial platforms exposes failures of existing detectors

Output Integrity Attack · vision
defense · arXiv · Jan 21, 2026

Erosion Attack for Adversarial Training to Enhance Semantic Segmentation Robustness

Yufei Song, Ziqi Zhou, Menghao Deng et al. · Huazhong University of Science and Technology · National University of Singapore +1 more

Proposes erosion-based adversarial attack on segmentation models that propagates perturbations from low- to high-confidence pixels, used to strengthen adversarial training robustness

Input Manipulation Attack · vision
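The low-to-high-confidence propagation described in the summary above can be sketched as a pixel-selection schedule: at each step, attack a growing fraction of pixels ordered by confidence, so perturbations start at easily flipped pixels and spread toward robust ones. The function and schedule are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def erosion_schedule(confidence: np.ndarray, steps: int):
    """Hypothetical sketch: yield a boolean mask per attack step. At step t,
    the fraction (t+1)/steps of lowest-confidence pixels is unmasked for
    perturbation, 'eroding' the prediction from its weakest points inward."""
    order = np.argsort(confidence, axis=None)  # flat indices, lowest confidence first
    n = confidence.size
    for t in range(steps):
        k = int(np.ceil((t + 1) / steps * n))  # how many pixels to attack this step
        mask = np.zeros(n, dtype=bool)
        mask[order[:k]] = True
        yield mask.reshape(confidence.shape)
```

In adversarial training, such masks would gate the perturbation applied at each inner step, which is how the attack could be reused to harden a segmentation model.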
defense · arXiv · Jan 28, 2026

UnlearnShield: Shielding Forgotten Privacy against Unlearning Inversion

Lulu Xue, Shengshan Hu, Wei Lu et al. · Huazhong University of Science and Technology · Institute of Guizhou Aerospace Measuring and Testing Technology +2 more

Defends machine unlearning against inversion attacks that reconstruct erased training data via cosine-space perturbations

Model Inversion Attack · vision