Sheng Zhong

Papers in Database (2)

attack · arXiv · Sep 5, 2025

On Evaluating the Poisoning Robustness of Federated Learning under Local Differential Privacy

Zijian Wang, Wei Tong, Tingxuan Han et al. · Nanjing University

Proposes adaptive model poisoning attacks that evade robust aggregation defenses in LDP-protected federated learning systems.

Data Poisoning Attack federated-learning
PDF Code
defense · arXiv · Mar 2, 2026

Towards Privacy-Preserving LLM Inference via Collaborative Obfuscation (Technical Report)

Yu Lin, Qizhi Zhang, Wenqiang Ruan et al. · ByteDance · Nanjing University

Defends user input privacy in cloud LLM inference by obfuscating activations to resist internal-state inversion attacks.

Model Inversion Attack Sensitive Information Disclosure nlp
PDF