Li Bai

h-index: 3 · 93 citations · 12 papers (total)

Papers in Database (5)

benchmark · WWW · Sep 23, 2025

MER-Inspector: Assessing model extraction risks from an attack-agnostic perspective

Xinwei Zhang, Haibo Hu, Qingqing Ye et al. · The Hong Kong Polytechnic University

Proposes NTK-based theoretical metrics to quantify model extraction risk across architectures without assuming a specific attack strategy

Model Theft · vision
4 citations · PDF
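The paper's concrete NTK-based risk metrics are not given in this summary; as background, the empirical Neural Tangent Kernel entry for two inputs is K(x1, x2) = ∇θf(x1) · ∇θf(x2). A minimal sketch, assuming an illustrative one-hidden-layer tanh network with hand-written gradients (the architecture and sizes are assumptions, not the paper's setup):

```python
# Hypothetical toy network: f(x) = w2 . tanh(W1 x).
# Illustrates an empirical NTK Gram matrix; not MER-Inspector's metric.
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(8, 4)) / 2.0    # hidden-layer weights (assumed shape)
w2 = rng.normal(size=8) / np.sqrt(8)  # output-layer weights

def param_grad(x):
    """Gradient of f(x) w.r.t. all parameters, flattened into one vector."""
    h = np.tanh(W1 @ x)
    g_w2 = h                             # df/dw2 = tanh(W1 x)
    g_W1 = np.outer(w2 * (1 - h**2), x)  # df/dW1 via the chain rule
    return np.concatenate([g_W1.ravel(), g_w2])

def ntk(x1, x2):
    """One empirical NTK entry: inner product of parameter gradients."""
    return param_grad(x1) @ param_grad(x2)

x1, x2 = rng.normal(size=4), rng.normal(size=4)
K = np.array([[ntk(x1, x1), ntk(x1, x2)],
              [ntk(x2, x1), ntk(x2, x2)]])
# K is a Gram matrix, hence symmetric positive semi-definite.
```

Since K is a Gram matrix of gradient vectors, it is symmetric PSD by construction; attack-agnostic analyses typically reason about quantities derived from such kernels (e.g. their spectra) rather than any specific extraction attack.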
attack · arXiv · Oct 15, 2025

Toward Efficient Inference Attacks: Shadow Model Sharing via Mixture-of-Experts

Li Bai, Qingqing Ye, Xinwei Zhang et al. · The Hong Kong Polytechnic University · PolyU Research Centre for Privacy and Security Technologies in Future Smart Systems +1 more

Builds an efficient shadow-model pool via Mixture-of-Experts, cutting the computational cost of membership inference attacks while preserving attack effectiveness

Membership Inference Attack · vision, nlp
2 citations · 1 influential · PDF
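The MoE-based shadow sharing itself is not detailed in this summary; for context, a minimal sketch of the classic shadow-model membership inference idea it builds on (Shokri et al. style), using an illustrative logistic-regression "model" and the simplest possible confidence threshold — all names and values below are assumptions:

```python
# Generic shadow-model membership inference sketch; not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def train_logistic(X, y, steps=200, lr=0.5):
    """Tiny logistic regression so the sketch is self-contained."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def confidence(w, X, y):
    """Confidence the model assigns to each true label."""
    p = 1 / (1 + np.exp(-X @ w))
    return np.where(y == 1, p, 1 - p)

# Synthetic population with noisy labels.
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=400) > 0).astype(float)

# Train a shadow model on one half; its known members vs non-members
# supply labeled examples for calibrating the attack threshold.
idx = rng.permutation(400)
members, nonmembers = idx[:200], idx[200:]
w_shadow = train_logistic(X[members], y[members])

conf_in = confidence(w_shadow, X[members], y[members])
conf_out = confidence(w_shadow, X[nonmembers], y[nonmembers])
threshold = (conf_in.mean() + conf_out.mean()) / 2  # simplest possible rule

def infer_membership(w_target, x, label):
    """Guess 'member' when true-label confidence exceeds the threshold."""
    return confidence(w_target, x[None, :], np.array([label]))[0] > threshold
```

The cost the paper targets comes from training many such shadow models; sharing parameters across them (here, via a Mixture-of-Experts) amortizes that training.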
defense · arXiv · Jan 11, 2026

United We Defend: Collaborative Membership Inference Defenses in Federated Learning

Li Bai, Junxu Liu, Sen Zhang et al. · The Hong Kong Polytechnic University · PolyU Research Centre for Privacy and Security Technologies in Future Smart Systems

Collaborative FL defense framework that limits local memorization to defeat trajectory-based membership inference attacks

Membership Inference Attack · federated-learning, vision
PDF · Code
attack · arXiv · Feb 10, 2026

Understanding and Enhancing Encoder-based Adversarial Transferability against Large Vision-Language Models

Xinwei Zhang, Li Bai, Tianwei Zhang et al. · The Hong Kong Polytechnic University · Nanyang Technological University +1 more

Proposes SGMA, a transferable adversarial visual attack on LVLMs targeting semantically critical regions to disrupt cross-modal grounding

Input Manipulation Attack · Prompt Injection · vision, multimodal, nlp
PDF
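SGMA's mechanics are not specified in this summary; as background, the adversarial visual perturbations such attacks start from can be sketched with a generic one-step FGSM-style update on a toy differentiable model — the linear "classifier", input dimension, and epsilon below are illustrative assumptions, not the paper's attack:

```python
# Generic FGSM-style perturbation sketch; SGMA itself is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(size=(3, 16))   # toy linear classifier over a 16-dim "image"
x = rng.normal(size=16)        # clean input
y = 0                          # true class index

def logits(x):
    return W @ x

def xent(x, y):
    """Cross-entropy loss of the true class (numerically stabilized)."""
    z = logits(x); z = z - z.max()
    return -z[y] + np.log(np.exp(z).sum())

def loss_grad(x, y):
    """Gradient of the cross-entropy loss w.r.t. the input."""
    z = logits(x)
    p = np.exp(z - z.max()); p /= p.sum()
    p[y] -= 1.0                # d(loss)/d(logits) = softmax - one_hot
    return W.T @ p             # chain rule through z = W x

eps = 0.5
x_adv = x + eps * np.sign(loss_grad(x, y))  # one FGSM step: ascend the loss
```

The perturbation stays inside an L-infinity ball of radius eps, and (for this convex toy loss) provably does not decrease the loss; transfer-oriented attacks like the one summarized above additionally shape the perturbation so it degrades cross-modal grounding across unseen LVLMs.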
attack · arXiv · Jan 29, 2026

On the Adversarial Robustness of Large Vision-Language Models under Visual Token Compression

Xinwei Zhang, Hangcheng Liu, Li Bai et al. · The Hong Kong Polytechnic University · Nanyang Technological University +1 more

Proposes CAGE, a compression-aware adversarial attack exposing that token-compressed VLM robustness is systematically overestimated by standard attacks

Input Manipulation Attack · vision, multimodal
PDF