Defense · 2025

TraceHiding: Scalable Machine Unlearning for Mobility Data

Ali Faraji, Manos Papagelis



Published on arXiv: 2509.17241

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

TraceHiding achieves up to 40x speedup over retraining with competitive membership inference attack resilience across all tested architectures on mobility trajectory unlearning tasks

TraceHiding

Novel technique introduced


This work introduces TraceHiding, a scalable, importance-aware machine unlearning framework for mobility trajectory data. Motivated by privacy regulations such as GDPR and CCPA granting users "the right to be forgotten," TraceHiding removes specified user trajectories from trained deep models without full retraining. It combines a hierarchical data-driven importance scoring scheme with teacher-student distillation. Importance scores, computed at token, trajectory, and user levels from statistical properties (coverage diversity, entropy, length), quantify each training sample's impact, enabling targeted forgetting of high-impact data while preserving common patterns. The student model retains knowledge on remaining data and unlearns targeted trajectories through an importance-weighted loss that amplifies forgetting signals for unique samples and attenuates them for frequent ones. We validate on Trajectory-User Linking (TUL) tasks across three real-world higher-order mobility datasets (HO-Rome, HO-Geolife, HO-NYC) and multiple architectures (GRU, LSTM, BERT, ModernBERT, GCN-TULHOR), against strong unlearning baselines including SCRUB, NegGrad, NegGrad+, Bad-T, and Finetuning. Experiments under uniform and targeted user deletion show TraceHiding, especially its entropy-based variant, achieves superior unlearning accuracy, competitive membership inference attack (MIA) resilience, and up to 40x speedup over retraining with minimal test accuracy loss. Results highlight robustness to adversarial deletion of high-information users and consistent performance across models. To our knowledge, this is the first systematic study of machine unlearning for trajectory data, providing a reproducible pipeline with public code and preprocessing tools.
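To make the importance-scoring idea concrete, here is a minimal sketch of a trajectory-level score combining the three statistics the abstract names (token entropy, coverage diversity, and length). The function name, the multiplicative combination, and the log-length term are illustrative assumptions, not the paper's exact formulation.

```python
import math
from collections import Counter

def trajectory_importance(tokens, vocab_size):
    """Illustrative trajectory-level importance score (sketch only).

    Combines the three statistics named in the paper: token entropy,
    coverage diversity, and length. The exact normalization and
    weighting in TraceHiding may differ.
    """
    counts = Counter(tokens)
    n = len(tokens)
    # Shannon entropy (bits) of the trajectory's token distribution
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    # Fraction of the location vocabulary this trajectory visits
    coverage = len(counts) / vocab_size
    # Simple multiplicative combination; a real pipeline would
    # normalize each term across the dataset.
    return entropy * coverage * math.log(1 + n)

# A repetitive commute scores lower than a diverse trajectory of equal length.
repetitive = ["home", "work"] * 10
diverse = [f"cell_{i}" for i in range(20)]
print(trajectory_importance(repetitive, vocab_size=100) <
      trajectory_importance(diverse, vocab_size=100))  # True
```

Under this scoring, unique, high-entropy trajectories receive large weights and are forgotten aggressively, while common patterns shared across many users are barely perturbed.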


Key Contributions

  • First systematic machine unlearning study for mobility trajectory data, evaluated across GRU, LSTM, BERT, ModernBERT, and GCN architectures on three real-world datasets
  • Hierarchical importance scoring scheme at token, trajectory, and user levels (coverage diversity, entropy, length) to weight forgetting signals by data uniqueness
  • Importance-weighted teacher-student distillation achieving up to 40x speedup over full retraining with competitive MIA resilience and minimal utility loss
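The importance-weighted teacher-student objective in the third contribution can be sketched as follows. This NumPy version assumes a retain-set distillation term (KL from teacher to student) plus a negated, importance-scaled cross-entropy on the forget set; the function name and the exact composition of terms are assumptions for illustration, not TraceHiding's published loss.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def unlearning_loss(student_retain, teacher_retain,
                    student_forget, labels_forget, importance):
    """Illustrative importance-weighted unlearning objective (sketch).

    Retain term: KL(teacher || student) on remaining data, so the
    student preserves the teacher's behavior there.
    Forget term: negated cross-entropy on deleted trajectories, scaled
    per-sample by importance -- unique samples get a stronger push
    toward being forgotten, frequent ones a weaker one.
    """
    p_t = softmax(teacher_retain)
    p_s = softmax(student_retain)
    retain = np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1))
    p_f = softmax(student_forget)
    ce = -np.log(p_f[np.arange(len(labels_forget)), labels_forget])
    forget = -np.mean(importance * ce)  # gradient-ascent-style forgetting
    return retain + forget
```

Minimizing this objective drives the student to match the teacher on retained users while raising the loss on deleted trajectories, with the importance weights controlling how hard each deleted sample is pushed.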

🛡️ Threat Analysis

Membership Inference Attack

The paper explicitly evaluates unlearning quality through membership inference attack (MIA) resilience as a primary metric, testing whether deleted trajectories can still be inferred from the model. This satisfies the machine unlearning exception for ML04 relevance.
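A common way to quantify MIA resilience is a confidence-thresholding attack scored by AUC: the attacker guesses "member" when the model is more confident on a sample than some threshold. The helper below computes the attack AUC via the Mann-Whitney U statistic; it is a generic sketch of this metric family, not necessarily the attack configuration used in the paper.

```python
import numpy as np

def mia_auc(conf_forget, conf_test):
    """AUC of a confidence-threshold membership inference attack.

    conf_forget: model confidences on deleted (forget-set) trajectories.
    conf_test:   model confidences on held-out non-member trajectories.
    AUC near 0.5 means the unlearned model leaks no membership signal
    for the forget set; AUC near 1.0 means deleted users are still
    identifiable. (Illustrative; ties are broken arbitrarily.)
    """
    scores = np.concatenate([conf_forget, conf_test])
    labels = np.concatenate([np.ones(len(conf_forget)),
                             np.zeros(len(conf_test))])
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = len(conf_forget), len(conf_test)
    # Mann-Whitney U statistic normalized to [0, 1] gives the AUC
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An effective unlearning method should drive this AUC toward 0.5, matching the behavior of a model retrained from scratch without the deleted users.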


Details

Domains
timeseries, nlp, graph
Model Types
transformer, rnn, gnn
Threat Tags
training_time
Datasets
HO-Rome, HO-Geolife, HO-NYC
Applications
mobility analytics, trajectory-user linking, location privacy