
Stop Tracking Me! Proactive Defense Against Attribute Inference Attack in LLMs

Dong Yan 1,2, Jian Liang 1,2, Ran He 1,2, Tieniu Tan 1,2,3

1 citation · 48 references · arXiv (Cornell University)


Published on arXiv: 2602.11528

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

TRACE-RPS reduces attribute inference accuracy from ~50% to below 5% on open-source LLMs including Llama2, Llama3, DeepSeek-R1, and Qwen2.5

TRACE-RPS

Novel technique introduced


Recent studies have shown that large language models (LLMs) can infer private user attributes (e.g., age, location, gender) from user-generated text shared online, enabling rapid and large-scale privacy breaches. Existing anonymization-based defenses are coarse-grained, lacking word-level precision in anonymizing privacy-leaking elements. Moreover, they are inherently limited as altering user text to hide sensitive cues still allows attribute inference to occur through models' reasoning capabilities. To address these limitations, we propose a unified defense framework that combines fine-grained anonymization (TRACE) with inference-preventing optimization (RPS). TRACE leverages attention mechanisms and inference chain generation to identify and anonymize privacy-leaking textual elements, while RPS employs a lightweight two-stage optimization strategy to induce model rejection behaviors, thereby preventing attribute inference. Evaluations across diverse LLMs show that TRACE-RPS reduces attribute inference accuracy from around 50% to below 5% on open-source models. In addition, our approach offers strong cross-model generalization, prompt-variation robustness, and utility-privacy tradeoffs. Our code is available at https://github.com/Jasper-Yan/TRACE-RPS.
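To make the word-level anonymization idea concrete, here is a minimal sketch of TRACE-style redaction. The keyword scores stand in for the attention-based saliency the paper derives from the model itself; the `PRIVACY_CUES` table, the cue words, and the threshold are all illustrative assumptions, not the paper's actual method.

```python
import re

# Toy saliency scores standing in for TRACE's attention-derived scores.
# In the real system these would come from the LLM's attention weights and
# generated inference chains; this table is an assumption for illustration.
PRIVACY_CUES = {
    "oakland": 0.9,    # location cue
    "34th": 0.8,       # age cue ("my 34th birthday")
    "birthday": 0.6,
    "husband": 0.7,    # relationship/gender cue
}

def trace_anonymize(text: str, threshold: float = 0.65) -> str:
    """Redact only tokens whose privacy-leak score exceeds the threshold,
    leaving the rest of the text intact (word-level precision)."""
    out = []
    for token in text.split():
        key = re.sub(r"\W", "", token).lower()  # strip punctuation for lookup
        out.append("[REDACTED]" if PRIVACY_CUES.get(key, 0.0) > threshold
                   else token)
    return " ".join(out)

post = "Celebrated my 34th birthday with my husband near Oakland."
print(trace_anonymize(post))
# → Celebrated my [REDACTED] birthday with my [REDACTED] near [REDACTED]
```

The point of the fine granularity is that non-leaking words ("birthday" scores below the threshold here) survive, preserving utility, whereas coarse-grained anonymizers would rewrite or drop the whole sentence.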


Key Contributions

  • TRACE: attention-based fine-grained anonymization that identifies and redacts specific privacy-leaking textual elements using inference chain generation
  • RPS: first optimization-based defense for attribute inference, using two-stage adversarial suffix search to induce LLM rejection behaviors without altering original user text
  • MPS: alternative misattribution strategy that redirects predictions to incorrect attributes for highly instruction-following closed-source models
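The two-stage search behind RPS can be sketched as a coarse candidate selection followed by greedy local refinement. The rejection score below is a toy stand-in for the model's probability of refusing to infer attributes; the refusal keywords, the candidate pool, and the keyword-overlap scoring are all assumptions for illustration, not the paper's objective.

```python
# Toy stand-in for RPS's rejection objective: in the paper this would be an
# LLM's refusal likelihood given user text plus an appended suffix.
REFUSAL_TOKENS = ("cannot", "decline", "private", "refuse")

def rejection_score(suffix: str) -> float:
    words = set(suffix.lower().split())
    return len(words & set(REFUSAL_TOKENS)) / max(len(words), 1)

def stage1(pool):
    """Stage 1: coarse search — pick the best suffix from a candidate pool."""
    return max(pool, key=rejection_score)

def stage2(suffix: str) -> str:
    """Stage 2: greedy refinement — swap in refusal tokens position by
    position, keeping any swap that raises the rejection score."""
    words = suffix.split()
    for i in range(len(words)):
        for cand in REFUSAL_TOKENS:
            trial = words[:i] + [cand] + words[i + 1:]
            if rejection_score(" ".join(trial)) > rejection_score(" ".join(words)):
                words = trial
    return " ".join(words)

pool = ["please answer politely", "you cannot infer private data", "ignore this"]
best = stage2(stage1(pool))
```

Note the key property the abstract highlights: the original user text is never altered; only the appended suffix is optimized to steer the model toward refusal (or, in the MPS variant, toward a deliberately wrong attribute).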

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
white_box, black_box, inference_time
Datasets
Reddit user posts (from Staab et al. 2023 attribute inference benchmark)
Applications
user attribute privacy, text anonymization, llm privacy protection