Latest papers

6 papers
defense arXiv Mar 11, 2026

Repurposing Backdoors for Good: Ephemeral Intrinsic Proofs for Verifiable Aggregation in Cross-silo Federated Learning

Xian Qin, Xue Yang, Xiaohu Tang · Southwest Jiaotong University

Repurposes backdoor injection as ephemeral verification signals to detect aggregation tampering by a malicious server in cross-silo federated learning

Data Poisoning Attack · federated-learning
PDF
defense arXiv Sep 15, 2025

Efficient Byzantine-Robust Privacy-Preserving Federated Learning via Dimension Compression

Xian Qin, Xue Yang, Xiaohu Tang · Southwest Jiaotong University

Defends federated learning against Byzantine poisoning and gradient inversion attacks by combining Johnson-Lindenstrauss dimension compression with homomorphic encryption, reducing overhead 25-35x

Data Poisoning Attack · Model Inversion Attack · federated-learning
PDF
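The dimension-compression idea above rests on Johnson-Lindenstrauss (JL) random projections, which shrink high-dimensional gradients while approximately preserving norms and distances, so robustness checks can run on far smaller ciphertexts. A minimal sketch of the projection step only (a plain Gaussian JL transform; the paper's exact scheme and its integration with homomorphic encryption are not reproduced here):

```python
import numpy as np

def jl_project(vectors: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Compress d-dimensional rows to k dimensions with a Gaussian JL transform.

    Entries of R are N(0, 1/k), so squared norms and pairwise distances are
    preserved in expectation, and within (1 +/- eps) with high probability
    for k = O(log n / eps^2).
    """
    d = vectors.shape[1]
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return vectors @ R

# Toy check: compress 10,000-dim "gradients" to 500 dims and compare norms.
grads = np.random.default_rng(1).standard_normal((8, 10_000))
compressed = jl_project(grads, k=500)
rel_err = (np.abs(np.linalg.norm(compressed, axis=1)
                  - np.linalg.norm(grads, axis=1))
           / np.linalg.norm(grads, axis=1))
print(compressed.shape)  # (8, 500)
print(float(rel_err.max()))
```

A 20x reduction in vector length translates directly into fewer homomorphic operations, which is the intuition behind the reported overhead savings.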
survey arXiv Sep 2, 2025

A Survey: Towards Privacy and Security in Mobile Large Language Models

Honghui Xu, Kaiyang Li, Wei Chen et al. · Kennesaw State University · Georgia State University +2 more

Surveys privacy and security threats to mobile LLMs: adversarial attacks, membership inference, side-channel leakage, and defenses

Input Manipulation Attack · Membership Inference Attack · Prompt Injection · Sensitive Information Disclosure · nlp
PDF
attack arXiv Aug 4, 2025

Hidden in the Noise: Unveiling Backdoors in Audio LLMs Alignment through Latent Acoustic Pattern Triggers

Liang Lin, Miao Yu, Kaiwen Luo et al. · Chinese Academy of Sciences · University of Science and Technology of China +4 more

Backdoor attack on audio LLMs using latent acoustic triggers such as background noise and speech rate, achieving >90% attack success rate at only a 3% poisoning ratio

Model Poisoning · audio · nlp
PDF Code
defense arXiv Jan 9, 2025

A New Perspective on Privacy Protection in Federated Learning with Granular-Ball Computing

Guannan Lai, Yihui Feng, Xin Yang et al. · Southwestern University of Finance and Economics · Chongqing University of Posts and Telecommunications +1 more

Defends federated learning against gradient reconstruction attacks by transforming images into coarse-grained graph structures before training

Model Inversion Attack · vision · federated-learning · graph
PDF Code
attack arXiv Jan 7, 2025

Rethinking Adversarial Attacks in Reinforcement Learning from Policy Distribution Perspective

Tianyang Duan, Zongyuan Zhang, Zheng Lin et al. · The University of Hong Kong · Fudan University +3 more

A novel PGD variant that attacks DRL policy distributions via the Bhattacharyya distance, outperforming action-level baselines by a 22% reward drop

Input Manipulation Attack · reinforcement-learning
PDF
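The Bhattacharyya distance used to compare policy distributions has a closed form for Gaussians, the usual parameterization of continuous-control policies. A minimal sketch of that distance alone (illustrative; the function name is my own, and the attack's adversarial loss construction is not reproduced here):

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Closed-form Bhattacharyya distance between two multivariate Gaussians.

    D_B = 1/8 (mu1-mu2)^T Sigma^{-1} (mu1-mu2)
        + 1/2 ln( det(Sigma) / sqrt(det(cov1) det(cov2)) ),
    where Sigma = (cov1 + cov2) / 2.
    """
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1 = np.atleast_2d(cov1).astype(float)
    cov2 = np.atleast_2d(cov2).astype(float)
    sigma = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    mahal = 0.125 * diff @ np.linalg.solve(sigma, diff)
    logdet = 0.5 * np.log(np.linalg.det(sigma)
                          / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return mahal + logdet

# Identical policies have distance 0; the distance grows as the means diverge.
print(bhattacharyya_gaussian([0, 0], np.eye(2), [0, 0], np.eye(2)))  # 0.0
d = bhattacharyya_gaussian([0, 0], np.eye(2), [2, 0], np.eye(2))
print(round(d, 3))  # 0.5
```

Maximizing such a distributional distance under a small input perturbation (PGD-style) targets the whole action distribution rather than a single sampled action, which is the distinction the summary draws against action-level baselines.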