
NLP Privacy Risk Identification in Social Media (NLP-PRISM): A Survey

Dhiman Goswami, Jai Kruthunz Naveen Kumar, Sanchari Das

0 citations · 210 references · SSRN


Published on arXiv (2602.15866)

Membership Inference Attack (OWASP ML Top 10 — ML04)

Model Inversion Attack (OWASP ML Top 10 — ML03)

Key Finding

Membership inference attack (MIA) AUC reaches 0.81 and attribute inference attack (AIA) accuracy reaches 0.75 on social media NLP models, while privacy-preserving fine-tuning degrades F1 by 1–23%, with identity-centric tasks showing the highest vulnerability.

NLP-PRISM

Novel technique introduced


Natural Language Processing (NLP) is integral to social media analytics but often processes content containing Personally Identifiable Information (PII), behavioral cues, and metadata, raising privacy risks such as surveillance, profiling, and targeted advertising. To systematically assess these risks, we review 203 peer-reviewed papers and propose the NLP Privacy Risk Identification in Social Media (NLP-PRISM) framework, which evaluates vulnerabilities across six dimensions: data collection, preprocessing, visibility, fairness, computational risk, and regulatory compliance. Our analysis shows that transformer models achieve F1-scores of 0.58–0.84 but incur a 1–23% drop under privacy-preserving fine-tuning. Using NLP-PRISM, we examine privacy coverage in six NLP tasks: sentiment analysis (16 papers), emotion detection (14), offensive language identification (19), code-mixed processing (39), native language identification (29), and dialect detection (24), revealing substantial gaps in privacy research. We further find a 2–9% reduction in model utility as a privacy trade-off, with membership inference attacks (MIA) reaching an AUC of 0.81 and attribute inference attacks (AIA) an accuracy of 0.75. Finally, we advocate for stronger anonymization, privacy-aware learning, and fairness-driven training to enable ethical NLP in social media contexts.


Key Contributions

  • NLP-PRISM: a six-dimensional framework for systematically characterizing privacy risks (data collection, preprocessing, visibility, fairness, computational risk, regulatory compliance) across social media NLP tasks
  • Quantitative evaluation of MIA (AUC up to 0.81) and AIA (accuracy up to 0.75) on transformer models across six NLP tasks, revealing identity-centric tasks as highest risk
  • Systematic review of 203 papers exposing gaps in privacy coverage and measuring 1–23% F1 drop under privacy-preserving fine-tuning
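The privacy-preserving fine-tuning whose F1 cost the survey measures is typically DP-SGD: clip each example's gradient, then add calibrated Gaussian noise. A minimal sketch for a linear model follows; the `clip_norm` and `noise_multiplier` values and the toy regression task are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD step: per-example clipping + Gaussian noise (illustrative)."""
    # Per-example gradients of squared error for a linear model.
    residual = X @ w - y                       # shape (batch,)
    grads = residual[:, None] * X              # shape (batch, d)
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)
    # Sum, add noise scaled to the clipping bound, then average.
    noisy = grads.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy / len(X)

# Toy data: y depends linearly on X with true weights [1, -2, 0.5].
X = rng.normal(size=(32, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=32)

w = np.zeros(3)
for _ in range(200):
    w = dp_sgd_step(w, X, y)
print("learned weights:", np.round(w, 1))
```

Clipping bounds each example's influence and the noise masks any one example's contribution; the resulting gradient bias and variance are exactly what produces the utility drops (1–23% F1) the survey reports.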

🛡️ Threat Analysis

Model Inversion Attack

Evaluates attribute inference attacks (AIA accuracy up to 0.75), where adversaries infer sensitive user attributes (demographics, identity) from model representations — a model inversion / private attribute reconstruction threat.
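An AIA of this kind can be sketched as a probe trained on frozen model embeddings to predict a private attribute. The setup below is entirely synthetic (hypothetical 64-d embeddings with a planted leaky dimension); it illustrates the threat model, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "model embeddings" of user posts where one latent
# direction correlates with a binary private attribute (e.g. a
# demographic group) -- the leakage an AIA probe exploits.
n, d = 2000, 64
attr = rng.integers(0, 2, size=n)        # private attribute (0/1)
emb = rng.normal(size=(n, d))
emb[:, 0] += 1.5 * attr                  # attribute leaks into dim 0

# Split into the adversary's probe-training set and a held-out set.
train, test = slice(0, 1500), slice(1500, n)

# Simple probe: nearest class centroid in embedding space.
c0 = emb[train][attr[train] == 0].mean(axis=0)
c1 = emb[train][attr[train] == 1].mean(axis=0)
pred = (np.linalg.norm(emb[test] - c1, axis=1)
        < np.linalg.norm(emb[test] - c0, axis=1)).astype(int)

acc = (pred == attr[test]).mean()
print(f"AIA probe accuracy: {acc:.2f}")
```

Even this crude probe recovers the attribute well above chance once any embedding dimension correlates with it, which is why identity-centric tasks (dialect, native language) rank as highest risk.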

Membership Inference Attack

Explicitly evaluates membership inference attacks (MIA) against transformer models (XLM-R, GPT-2, FLAN-T5), reporting MIA AUC up to 0.81 across six NLP tasks applied to social media.
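The loss-threshold family of MIA can be sketched on synthetic data: members (training examples) tend to have lower loss than non-members, so per-example loss becomes an attack score ranked by AUC. The loss distributions below are invented for illustration and are not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-example losses: members tend to have lower loss than
# non-members -- the signal a threshold MIA exploits.
member_losses = rng.normal(loc=0.6, scale=0.3, size=1000)
nonmember_losses = rng.normal(loc=1.0, scale=0.3, size=1000)

# Attack score: lower loss => more likely a member.
scores = np.concatenate([-member_losses, -nonmember_losses])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 = member

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney U) formulation."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

print(f"MIA AUC: {auc(scores, labels):.2f}")
```

An AUC of 0.5 means the attacker cannot distinguish members from non-members; the survey's reported 0.81 on social media NLP models indicates a substantial membership leak.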


Details

Domains
nlp
Model Types
transformer, llm
Threat Tags
black_box, inference_time
Datasets
SemEval sentiment datasets, WASSA emotion datasets, SemEval offensive language datasets, VarDial dialect datasets, NLI shared task datasets
Applications
sentiment analysis, emotion detection, offensive language identification, dialect detection, native language identification, code-mixed processing, social media analytics