Wenbo Jiang

Papers in Database (5)

defense arXiv Jan 11, 2025 · Jan 2025

DivTrackee versus DynTracker: Promoting Diversity in Anti-Facial Recognition against Dynamic FR Strategy

Wenshu Fan, Minxing Zhang, Hongwei Li et al. · University of Electronic Science and Technology of China · CISPA Helmholtz Center for Information Security +1 more

Introduces an adaptive gallery-update attack that breaks existing AFR defenses, then counters it with diverse adversarial perturbations for facial privacy

Input Manipulation Attack vision
PDF Code
attack arXiv Mar 12, 2026 · Mar 2026

Delayed Backdoor Attacks: Exploring the Temporal Dimension as a New Attack Surface in Pre-Trained Models

Zikang Ding, Haomiao Yang, Meng Hao et al. · University of Electronic Science and Technology of China · Singapore Management University +2 more

Proposes temporally delayed backdoor attacks on pre-trained NLP models that use common everyday words as stealthy triggers

Model Poisoning nlp
PDF
attack arXiv Aug 6, 2025 · Aug 2025

BadTime: An Effective Backdoor Attack on Multivariate Long-Term Time Series Forecasting

Kunlan Xiang, Haomiao Yang, Meng Hao et al. · University of Electronic Science and Technology of China · Singapore Management University +3 more

Proposes the first backdoor attack on multivariate time-series forecasting, extending the attackable horizon 60× (to 720 timesteps) via lag-aware distributed triggers

Model Poisoning Data Poisoning Attack timeseries
PDF
attack arXiv Aug 26, 2025 · Aug 2025

Hidden Tail: Adversarial Image Causing Stealthy Resource Consumption in Vision-Language Models

Rui Zhang, Zihan Wang, Tianli Yang et al. · University of Electronic Science and Technology of China · City University of Hong Kong +1 more

Adversarial-image attack on VLMs that maximizes output length via hidden special tokens, stealthily exhausting inference resources

Input Manipulation Attack Model Denial of Service vision multimodal nlp
PDF Code
defense arXiv Aug 2, 2025 · Aug 2025

ConfGuard: A Simple and Effective Backdoor Detection for Large Language Models

Zihan Wang, Rui Zhang, Hongwei Li et al. · University of Electronic Science and Technology of China · City University of Hong Kong

Detects LLM backdoors in real time by monitoring token-confidence windows that reveal the 'sequence lock' phenomenon

Model Poisoning nlp
PDF Code