
Exposing LLM User Privacy via Traffic Fingerprint Analysis: A Study of Privacy Risks in LLM Agent Interactions

Yixiang Zhang 1, Xinhao Deng 1,2, Zhongyi Gu 1, Yihao Chen 1, Ke Xu 1, Qi Li 1, Jianping Wu 1

2 citations · 84 references · arXiv


Published on arXiv · 2510.07176

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

From encrypted LLM agent traffic alone, AgentPrint achieves an F1-score of 0.866 for agent identification and top-3 accuracies of 73.9% and 69.1% for user attribute inference in simulated- and real-user settings, respectively.
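For reference, top-3 accuracy counts a prediction as correct whenever the true label appears among the classifier's three highest-scoring classes. A minimal sketch of the metric (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def top_k_accuracy(probs: np.ndarray, labels: np.ndarray, k: int = 3) -> float:
    """Fraction of samples whose true label is among the k highest-scoring classes.

    probs:  (n_samples, n_classes) predicted class scores or probabilities
    labels: (n_samples,) integer ground-truth labels
    """
    # Indices of the k largest scores per row (ordering within the top k is irrelevant)
    top_k = np.argsort(probs, axis=1)[:, -k:]
    hits = (top_k == labels[:, None]).any(axis=1)
    return float(hits.mean())
```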

AgentPrint

Novel technique introduced


Large Language Models (LLMs) are increasingly deployed as agents that orchestrate tasks and integrate external tools to execute complex workflows. We demonstrate that these interactive behaviors leave distinctive fingerprints in encrypted traffic exchanged between users and LLM agents. By analyzing traffic patterns associated with agent workflows and tool invocations, adversaries can infer agent activities, distinguish specific agents, and even profile sensitive user attributes. To highlight this risk, we develop AgentPrint, which achieves an F1-score of 0.866 in agent identification and attains 73.9% and 69.1% top-3 accuracy in user attribute inference for simulated- and real-user settings, respectively. These results uncover an overlooked risk: the very interactivity that empowers LLM agents also exposes user privacy, underscoring the urgent need for technical countermeasures alongside regulatory and policy safeguards.
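To make the attack surface concrete, below is a minimal sketch of the generic traffic-fingerprinting recipe this class of attack builds on: a passive observer records only packet sizes, directions, and timestamps of an encrypted session, summarizes each session as a feature vector, and trains a supervised classifier on labeled traces. The feature set and random-forest model here are assumptions for illustration, not a reproduction of AgentPrint's actual design:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trace_features(trace: list[tuple[float, int]]) -> np.ndarray:
    """Summarize one encrypted session as side-channel features.

    trace: list of (timestamp_seconds, signed_size) per packet;
           positive size = user -> agent, negative = agent -> user.
    """
    sizes = np.array([s for _, s in trace], dtype=float)
    times = np.array([t for t, _ in trace], dtype=float)
    gaps = np.diff(times) if len(times) > 1 else np.zeros(1)
    up, down = sizes[sizes > 0], -sizes[sizes < 0]
    return np.array([
        len(sizes),                            # packet count
        up.sum(), down.sum(),                  # bytes per direction
        up.mean() if len(up) else 0.0,         # mean upstream packet size
        down.mean() if len(down) else 0.0,     # mean downstream packet size
        gaps.mean(), gaps.std(),               # inter-arrival timing statistics
        (np.diff(np.sign(sizes)) != 0).sum(),  # direction changes (burst structure)
    ])

# Hypothetical labeled corpus: one trace per observed agent session.
# X = np.stack([trace_features(t) for t in traces]); y = agent_labels
# clf = RandomForestClassifier(n_estimators=300).fit(X, y)
# predicted_agent = clf.predict(trace_features(new_trace)[None, :])
```

Even a crude feature set like this can separate services whose payloads are fully encrypted, because request/response sizes and burst timing are application-dependent.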


Key Contributions

  • Discovery that LLM agent workflows (multi-step tool invocations) produce distinctive encrypted traffic fingerprints exploitable for agent identification and user profiling (see the burst-segmentation sketch after this list)
  • AgentPrint, a system achieving F1 = 0.866 for agent identification and 73.9%/69.1% top-3 accuracy for user attribute inference in simulated/real-user settings, from traffic analysis alone
  • Empirical demonstration of the privacy risk in both simulated- and real-user settings, motivating technical countermeasures for LLM agent deployments
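The first contribution rests on a simple intuition: each step of an agent workflow (a model call, a tool invocation, a streamed response) emits its own burst of packets, so the sequence of burst volumes survives encryption and fingerprints the workflow. A toy illustration of gap-based burst segmentation follows; the 0.25 s idle threshold and byte-volume representation are assumptions, not the paper's method:

```python
def segment_bursts(trace, idle_gap=0.25):
    """Split a packet trace into bursts separated by idle periods.

    trace: list of (timestamp_seconds, signed_size); a gap longer than
    idle_gap seconds starts a new burst. Each tool invocation in a
    multi-step workflow tends to appear as its own burst, so the
    resulting list of per-burst byte volumes acts as a coarse fingerprint.
    """
    bursts, current, last_t = [], [], None
    for t, size in trace:
        if last_t is not None and t - last_t > idle_gap:
            bursts.append(current)
            current = []
        current.append(abs(size))
        last_t = t
    if current:
        bursts.append(current)
    return [sum(b) for b in bursts]  # e.g. [1450, 96020, 2210, ...] bytes per step
```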

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time
Applications
llm agent systems, user attribute privacy