Latest papers

7 papers
defense arXiv Mar 4, 2026 · 4w ago

From Spark to Fire: Modeling and Mitigating Error Cascades in LLM-Based Multi-Agent Collaboration

Yizhe Xie, Congcong Zhu, Xinyue Zhang et al. · City University of Macau · Minzu University of China

Models error cascades seeded by injected faults in LLM multi-agent systems and mitigates them via genealogy-graph message governance

Prompt Injection Excessive Agency nlp
PDF Code
defense arXiv Mar 3, 2026 · 4w ago

RAIN: Secure and Robust Aggregation under Shuffle Model of Differential Privacy

Yuhang Li, Yajie Wang, Xiangyun Tang et al. · Beijing Institute of Technology · Minzu University of China

Defends federated learning against Byzantine poisoning and shuffler tampering under Shuffle-DP with verifiable secret-shared aggregation

Data Poisoning Attack federated-learning
PDF
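The secret-shared aggregation named in the summary can be illustrated with plain additive secret sharing: each client splits its update into random shares so no single server sees the true value, yet summing the servers' share-sums recovers the aggregate. This is a minimal sketch of the general idea, not RAIN's actual protocol; the field modulus and share counts are assumptions.

```python
import random

# Assumed field modulus for the sketch (RAIN's parameters may differ).
PRIME = 2**61 - 1

def share(value, n):
    """Split an integer into n additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the shared value by summing all shares modulo PRIME."""
    return sum(shares) % PRIME

# Two clients each share an integer-encoded update among 3 servers.
updates = [5, 9]
per_server = [share(u, 3) for u in updates]

# Each server locally sums the shares it holds; combining the
# server sums yields the true aggregate without exposing any
# individual client's update.
server_sums = [sum(col) % PRIME for col in zip(*per_server)]
aggregate = reconstruct(server_sums)  # equals sum(updates)
```

Byzantine robustness and shuffler-tampering verification would sit on top of this masking layer.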
defense arXiv Feb 5, 2026 · 8w ago

ALIEN: Analytic Latent Watermarking for Controllable Generation

Liangqi Lei, Keke Gai, Jing Yu et al. · Beijing Institute of Technology · Minzu University of China +1 more

Embeds analytically derived watermarks in diffusion model latents for content provenance, with improved generation quality and attack robustness

Output Integrity Attack vision generative
PDF Code
benchmark arXiv Oct 21, 2025 · Oct 2025

The Trust Paradox in LLM-Based Multi-Agent Systems: When Collaboration Becomes a Security Vulnerability

Zijie Xu, Minfeng Qi, Shiqing Wu et al. · Minzu University of China · City University of Macau +1 more

Empirically validates that higher inter-agent trust in LLM multi-agent systems increases sensitive data over-exposure and authorization boundary violations

Excessive Agency Sensitive Information Disclosure nlp
2 citations PDF
attack arXiv Sep 26, 2025 · Sep 2025

Non-Linear Trajectory Modeling for Multi-Step Gradient Inversion Attacks in Federated Learning

Li Xia, Jing Yu, Zheng Liu et al. · Minzu University of China · Beijing University of Posts and Telecommunications

Proposes NL-SME, a gradient inversion attack using Bézier curve trajectory modeling to reconstruct FL training data more accurately than linear methods

Model Inversion Attack federated-learning vision
2 citations PDF Code
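The Bézier-curve trajectory modeling mentioned in the summary replaces the straight-line interpolation between observed weight snapshots with a curved path. A minimal sketch of a quadratic Bézier curve in parameter space follows; the control point here is a hypothetical stand-in, not the paper's actual construction.

```python
# Quadratic Bezier curve: B(t) = (1-t)^2 * P0 + 2(1-t)t * P1 + t^2 * P2.
# P0 and P2 are observed model snapshots; P1 bends the path away from
# the straight line assumed by linear multi-step inversion methods.
def bezier_quadratic(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t in [0, 1]."""
    return [
        (1 - t) ** 2 * a + 2 * (1 - t) * t * b + t ** 2 * c
        for a, b, c in zip(p0, p1, p2)
    ]

w_start = [0.0, 1.0]   # parameters before a local training round
w_end   = [1.0, 0.0]   # parameters after the round
control = [1.0, 1.0]   # hypothetical intermediate control point

# Midpoint of the curved trajectory, off the straight-line chord.
midpoint = bezier_quadratic(w_start, control, w_end, 0.5)  # [0.75, 0.75]
```

The curve passes through both snapshots (t = 0 and t = 1) while the interior follows the control point, which is what lets a non-linear model fit multi-step updates more closely than linear interpolation.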
defense arXiv Aug 14, 2025 · Aug 2025

A Vision-Language Pre-training Model-Guided Approach for Mitigating Backdoor Attacks in Federated Learning

Keke Gai, Dongjue Wang, Jing Yu et al. · Beijing Institute of Technology · Minzu University of China +1 more

Defends federated learning against backdoor attacks under non-IID data, using CLIP zero-shot alignment to eliminate trigger-label correlations

Model Poisoning vision federated-learning multimodal
PDF Code
attack KSEM Aug 9, 2025 · Aug 2025

Label Inference Attacks against Federated Unlearning

Wei Wang, Xiangyun Tang, Yajie Wang et al. · Minzu University of China · Beijing Institute of Technology +3 more

Attacks federated unlearning systems by inferring private data labels from model parameter variations using gradient-label mapping

Model Inversion Attack federated-learning
PDF