survey · arXiv · Sep 2, 2025
Honghui Xu, Kaiyang Li, Wei Chen et al. · Kennesaw State University · Georgia State University +2 more
Surveys privacy and security threats to mobile LLMs: adversarial attacks, membership inference, side-channel leakage, and defenses
Input Manipulation Attack · Membership Inference Attack · Prompt Injection · Sensitive Information Disclosure · nlp
Mobile Large Language Models (LLMs) are revolutionizing diverse fields such as healthcare, finance, and education with their ability to perform advanced natural language processing tasks on-the-go. However, the deployment of these models in mobile and edge environments introduces significant challenges related to privacy and security due to their resource-intensive nature and the sensitivity of the data they process. This survey provides a comprehensive overview of privacy and security issues associated with mobile LLMs, systematically categorizing existing solutions such as differential privacy, federated learning, and prompt encryption. Furthermore, we analyze vulnerabilities unique to mobile LLMs, including adversarial attacks, membership inference, and side-channel attacks, offering an in-depth comparison of their effectiveness and limitations. Despite recent advancements, mobile LLMs face unique hurdles in achieving robust security while maintaining efficiency in resource-constrained environments. To bridge this gap, we propose potential applications, discuss open challenges, and suggest future research directions, paving the way for the development of trustworthy, privacy-compliant, and scalable mobile LLM systems.
llm · transformer
defense · arXiv · Sep 11, 2025
Honghui Xu, Shiva Shrestha, Wei Chen et al. · Kennesaw State University · Nexa AI +1 more
Defends federated LLM fine-tuning against membership inference attacks via LoRA with differential privacy noise injection
Membership Inference Attack · nlp · federated-learning
As on-device large language model (LLM) systems become increasingly prevalent, federated fine-tuning enables advanced language understanding and generation directly on edge devices; however, it also involves processing sensitive, user-specific data, raising significant privacy concerns within the federated learning framework. To address these challenges, we propose DP-FedLoRA, a privacy-enhanced federated fine-tuning framework that integrates LoRA-based adaptation with differential privacy in a communication-efficient setting. Each client locally clips and perturbs its LoRA matrices using Gaussian noise to satisfy $(\epsilon, \delta)$-differential privacy. We further provide a theoretical analysis demonstrating the unbiased nature of the updates and deriving bounds on the variance introduced by noise, offering practical guidance for privacy-budget calibration. Experimental results across mainstream benchmarks show that DP-FedLoRA delivers competitive performance while offering strong privacy guarantees, paving the way for scalable and privacy-preserving LLM deployment in on-device environments.
llm · federated
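The per-client step the DP-FedLoRA abstract describes (clip each LoRA matrix, then add Gaussian noise calibrated to an $(\epsilon, \delta)$ budget) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name `dp_perturb_lora`, the Frobenius-norm clipping choice, and the use of the standard Gaussian-mechanism noise scale are all assumptions.

```python
import numpy as np

def dp_perturb_lora(A, B, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=None):
    """Hypothetical sketch of a per-client DP step on LoRA factors.

    Each matrix's Frobenius norm is clipped to `clip_norm`, then Gaussian
    noise with the standard (epsilon, delta) mechanism's scale
    sigma = clip_norm * sqrt(2 ln(1.25/delta)) / epsilon is added.
    The actual clipping/calibration in DP-FedLoRA may differ.
    """
    rng = rng or np.random.default_rng()
    # Noise scale from the classic Gaussian-mechanism bound
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    out = []
    for M in (A, B):
        norm = np.linalg.norm(M)  # Frobenius norm of the LoRA factor
        M_clipped = M * min(1.0, clip_norm / (norm + 1e-12))
        out.append(M_clipped + rng.normal(0.0, sigma, size=M.shape))
    return out

# Example: perturb a rank-2 LoRA pair before sending it to the server
A = np.ones((4, 2))
B = np.ones((2, 4))
A_noisy, B_noisy = dp_perturb_lora(A, B, epsilon=1.0, delta=1e-5)
```

Because the noise is zero-mean, the server-side average of many such perturbed updates stays unbiased, which matches the unbiasedness claim in the abstract.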