Qingqing Ye

h-index: 18 · 1,296 citations · 118 papers (total)

Papers in Database (11)

benchmark · WWW · Sep 23, 2025

MER-Inspector: Assessing model extraction risks from an attack-agnostic perspective

Xinwei Zhang, Haibo Hu, Qingqing Ye et al. · The Hong Kong Polytechnic University

Proposes NTK-based theoretical metrics to quantify model extraction risk across architectures without assuming a specific attack strategy

Model Theft · vision
4 citations · PDF
attack · arXiv · Oct 15, 2025

Toward Efficient Inference Attacks: Shadow Model Sharing via Mixture-of-Experts

Li Bai, Qingqing Ye, Xinwei Zhang et al. · The Hong Kong Polytechnic University · PolyU Research Centre for Privacy and Security Technologies in Future Smart Systems

Efficient shadow model pool via Mixture-of-Experts cuts computational cost of membership inference attacks while preserving attack effectiveness

Membership Inference Attack · vision · nlp
2 citations · 1 influential · PDF
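The shadow-model pipeline this entry makes cheaper can be sketched generically: train shadow models on known member/non-member splits, then train an attack classifier on their confidence scores. This is the classic baseline in the spirit of Shokri et al., not the paper's Mixture-of-Experts sharing scheme; the data, model choices, and helper names below are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic 2-class task

def true_label_confidence(model, X_part, y_part):
    """Confidence the model assigns to each example's true label."""
    proba = model.predict_proba(X_part)
    return proba[np.arange(len(y_part)), y_part].reshape(-1, 1)

def fit_and_featurize(seed):
    """Train one shadow model on a known split; return (features, member labels)."""
    X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=seed)
    model = RandomForestClassifier(n_estimators=30, bootstrap=False,
                                   random_state=seed).fit(X_in, y_in)
    feats = np.vstack([true_label_confidence(model, X_in, y_in),
                       true_label_confidence(model, X_out, y_out)])
    labels = np.r_[np.ones(len(X_in)), np.zeros(len(X_out))]  # 1 = member
    return feats, labels

# Several shadow models mimic the target's training pipeline...
shadow = [fit_and_featurize(s) for s in range(4)]
attack = LogisticRegression().fit(np.vstack([f for f, _ in shadow]),
                                  np.concatenate([l for _, l in shadow]))

# ...and the attack classifier is scored against a held-out "target" model.
feats, truth = fit_and_featurize(99)
acc = attack.score(feats, truth)  # above 0.5 indicates membership leakage
```

The cost the paper targets is visible here: each shadow model is a full training run, so attack cost scales linearly with the size of the shadow pool.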
attack · arXiv · Sep 27, 2025

Virus Infection Attack on LLMs: Your Poisoning Can Spread "VIA" Synthetic Data

Zi Liang, Qingqing Ye, Xuan Liu et al. · The Hong Kong Polytechnic University · University of California

Proposes VIA, an attack framework that spreads poisoning and backdoor payloads through LLM synthetic data by hijacking benign training samples

Data Poisoning Attack · Model Poisoning · Training Data Poisoning · nlp
2 citations · 1 influential · PDF
defense · arXiv · Nov 11, 2025

Class-feature Watermark: A Resilient Black-box Watermark Against Model Extraction Attacks

Yaxin Xiao, Qingqing Ye, Zi Liang et al. · The Hong Kong Polytechnic University · Huawei Technologies

Proposes WRK to break existing black-box model watermarks, then introduces CFW watermarking resilient to combined extraction and removal attacks

Model Theft · vision
PDF · Code
attack · arXiv · Jan 20, 2026

Diffusion-Guided Backdoor Attacks in Real-World Reinforcement Learning

Tairan Huang, Qingqing Ye, Yulin Jin et al. · The Hong Kong Polytechnic University

Diffusion-generated floor-patch triggers bypass real-world safety control stacks to reliably activate backdoors in RL robot policies

Model Poisoning · reinforcement-learning · vision
PDF
defense · arXiv · Nov 29, 2025

Adversarial Signed Graph Learning with Differential Privacy

Haobin Ke, Sen Zhang, Qingqing Ye et al. · The Hong Kong Polytechnic University

Defends signed GNNs against link-stealing attacks using adversarial training and differential privacy with node-level guarantees

Membership Inference Attack · graph
PDF · Code
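The differential-privacy side of this defense builds on a standard primitive worth recalling: adding calibrated Laplace noise to a released statistic. The sketch below is that generic building block only, not the paper's node-level signed-GNN construction; the degree query and parameter values are illustrative.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """epsilon-DP release of a scalar query with the given L1 sensitivity."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
# e.g. one node's signed-edge count; adding/removing an edge shifts it by 1,
# so the query's L1 sensitivity is 1.
true_degree = 12.0
noisy_degree = laplace_mechanism(true_degree, sensitivity=1.0, epsilon=1.0, rng=rng)
```

Smaller epsilon means a larger noise scale and stronger privacy; node-level guarantees like the paper's are stricter than edge-level ones because a single node can affect many edges at once, inflating the sensitivity.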
attack · arXiv · Nov 12, 2025

SEBA: Sample-Efficient Black-Box Attacks on Visual Reinforcement Learning

Tairan Huang, Yulin Jin, Junxu Liu et al. · The Hong Kong Polytechnic University

Black-box adversarial attack on visual RL agents that uses a GAN and a shadow Q-model to minimize environment queries

Input Manipulation Attack · vision · reinforcement-learning
PDF
defense · arXiv · Jan 11, 2026

United We Defend: Collaborative Membership Inference Defenses in Federated Learning

Li Bai, Junxu Liu, Sen Zhang et al. · The Hong Kong Polytechnic University · PolyU Research Centre for Privacy and Security Technologies in Future Smart Systems

Collaborative FL defense framework that limits local memorization to defeat trajectory-based membership inference attacks

Membership Inference Attack · federated-learning · vision
PDF · Code
attack · arXiv · Jan 29, 2026

On the Adversarial Robustness of Large Vision-Language Models under Visual Token Compression

Xinwei Zhang, Hangcheng Liu, Li Bai et al. · The Hong Kong Polytechnic University · Nanyang Technological University

Proposes CAGE, a compression-aware adversarial attack showing that standard attacks systematically overestimate the robustness of token-compressed VLMs

Input Manipulation Attack · vision · multimodal
PDF
attack · arXiv · Feb 10, 2026

Understanding and Enhancing Encoder-based Adversarial Transferability against Large Vision-Language Models

Xinwei Zhang, Li Bai, Tianwei Zhang et al. · The Hong Kong Polytechnic University · Nanyang Technological University

Proposes SGMA, a transferable adversarial visual attack on LVLMs targeting semantically critical regions to disrupt cross-modal grounding

Input Manipulation Attack · Prompt Injection · vision · multimodal · nlp
PDF
defense · arXiv · Jan 29, 2026

FIT: Defying Catastrophic Forgetting in Continual LLM Unlearning

Xiaoyu Xu, Minxin Du, Kun Fang et al. · The Hong Kong Polytechnic University · Ant Group

Defends continual LLM unlearning of PII, copyright, and harmful content against adversarial recovery via relearning and quantization attacks

Sensitive Information Disclosure · nlp
PDF · Code