A Survey: Towards Privacy and Security in Mobile Large Language Models

Honghui Xu 1, Kaiyang Li 2, Wei Chen 3, Danyang Zheng 4, Zhiyuan Li 3, Zhipeng Cai 2

Published on arXiv: 2509.02411

OWASP Threat Mapping

  • Input Manipulation Attack (OWASP ML Top 10, ML01)
  • Membership Inference Attack (OWASP ML Top 10, ML04)
  • Prompt Injection (OWASP LLM Top 10, LLM01)
  • Sensitive Information Disclosure (OWASP LLM Top 10, LLM06)

Key Finding

Mobile LLMs face unique security-efficiency tradeoffs not present in server-side deployments, with adversarial attacks, membership inference, and side-channel leakage representing the most critical unsolved threats in resource-constrained environments.


Mobile Large Language Models (LLMs) are revolutionizing diverse fields such as healthcare, finance, and education with their ability to perform advanced natural language processing tasks on the go. However, the deployment of these models in mobile and edge environments introduces significant challenges related to privacy and security due to their resource-intensive nature and the sensitivity of the data they process. This survey provides a comprehensive overview of privacy and security issues associated with mobile LLMs, systematically categorizing existing solutions such as differential privacy, federated learning, and prompt encryption. Furthermore, we analyze vulnerabilities unique to mobile LLMs, including adversarial attacks, membership inference, and side-channel attacks, offering an in-depth comparison of their effectiveness and limitations. Despite recent advancements, mobile LLMs face unique hurdles in achieving robust security while maintaining efficiency in resource-constrained environments. To bridge this gap, we propose potential applications, discuss open challenges, and suggest future research directions, paving the way for the development of trustworthy, privacy-compliant, and scalable mobile LLM systems.
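Of the mitigations the abstract names, differential privacy is the most self-contained to illustrate. A minimal sketch of the classic Laplace mechanism (function and parameter names here are illustrative, not taken from the survey):

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise calibrated so the release
    satisfies epsilon-differential privacy for a query of the given
    sensitivity (the max change one record can cause in the value)."""
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of Laplace(0, scale) from a single uniform draw.
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

Smaller epsilon means larger noise and stronger privacy; on a device, the same mechanism can perturb gradients or aggregate statistics before they ever leave the phone.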


Key Contributions

  • Comprehensive taxonomy of privacy and security threats specific to mobile/edge LLM deployments (adversarial attacks, membership inference, side-channel attacks)
  • Systematic categorization of mitigation strategies including differential privacy, federated learning, and prompt encryption with effectiveness comparisons
  • Identification of open challenges and future research directions for secure, efficient mobile LLM deployment
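Federated learning, one of the categorized mitigation strategies, keeps raw data on the device and shares only model updates with a server. A toy sketch of the FedAvg aggregation step (names and shapes are illustrative; real updates are large tensors, not short lists):

```python
def fedavg(client_updates: list[list[float]], client_sizes: list[int]) -> list[float]:
    """Average client model updates, weighting each client by its local
    dataset size; the server never sees the clients' raw data."""
    total = sum(client_sizes)
    dim = len(client_updates[0])
    return [
        sum(update[i] * n for update, n in zip(client_updates, client_sizes)) / total
        for i in range(dim)
    ]

# Two phones with equal data sizes: the server averages their updates.
global_update = fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 1])  # [2.0, 3.0]
```

In practice this step is combined with secure aggregation or differential privacy, since plain model updates can still leak information about the local data.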

🛡️ Threat Analysis

Input Manipulation Attack

Adversarial attacks against mobile LLMs are explicitly identified as a primary vulnerability category and systematically analyzed throughout the survey.
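Input manipulation attacks typically add a small, carefully signed perturbation to the model's input. A minimal FGSM-style sketch of one common instance of this attack class (the gradient is assumed given; all names are illustrative):

```python
def fgsm_perturb(x: list[float], grad: list[float], eps: float) -> list[float]:
    """Fast Gradient Sign Method step: nudge each input feature by eps
    in the direction that increases the model's loss."""
    def sign(g: float) -> float:
        return 1.0 if g > 0 else -1.0 if g < 0 else 0.0
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Each feature moves by eps toward higher loss.
adv = fgsm_perturb([0.5, 0.5], [1.0, -2.0], 0.25)  # [0.75, 0.25]
```

The perturbation is small enough to look benign yet can flip the model's prediction, which is why this sits at the top of the OWASP ML list.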

Membership Inference Attack

Membership inference attacks are explicitly listed as a key vulnerability analyzed in depth, with comparison of effectiveness and limitations of defenses.
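A simple instance of this attack class is the loss-threshold test: training ("member") examples tend to incur lower loss than unseen ones. A toy sketch, with the threshold calibrated on hypothetical shadow-model losses (all names and values are illustrative):

```python
import statistics

def calibrate_threshold(member_losses: list[float], nonmember_losses: list[float]) -> float:
    """Crude decision boundary: midpoint between the mean loss of known
    members and known non-members (e.g. measured on a shadow model)."""
    return (statistics.mean(member_losses) + statistics.mean(nonmember_losses)) / 2

def predict_member(loss: float, threshold: float) -> bool:
    """Guess 'member' when the target model's loss on the example is low."""
    return loss < threshold

t = calibrate_threshold([0.1, 0.2], [0.8, 0.9])  # ≈ 0.5
```

Defenses such as differential privacy work precisely by shrinking this member/non-member loss gap, which is why the survey compares them on that axis.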


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, training_time, black_box
Applications
mobile ai assistants, on-device nlp, healthcare mobile ai, financial mobile ai, edge inference