Defense · 2025

Privacy on the Fly: A Predictive Adversarial Transformation Network for Mobile Sensor Data

Tianle Song 1, Chenhao Lin 1, Yang Cao 2, Zhengyu Zhao 1, Jiahao Sun 1, Chong Zhang 1, Le Yang 1, Chao Shen 1

Published on arXiv · 2511.07242

0 citations · 30 references

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

PATN drives privacy inference models to near-random accuracy (ASR 40.11% / 44.65%) and raises the Equal Error Rate from ~8% to ~44%, outperforming all baselines on both evaluated datasets

PATN (Predictive Adversarial Transformation Network)

Novel technique introduced


Mobile motion sensors such as accelerometers and gyroscopes are now ubiquitously accessible by third-party apps via standard APIs. While enabling rich functionalities like activity recognition and step counting, this openness has also enabled unregulated inference of sensitive user traits, such as gender, age, and even identity, without user consent. Existing privacy-preserving techniques, such as GAN-based obfuscation or differential privacy, typically require access to the full input sequence, introducing latency that is incompatible with real-time scenarios. Worse, they tend to distort temporal and semantic patterns, degrading the utility of the data for benign tasks like activity recognition. To address these limitations, we propose the Predictive Adversarial Transformation Network (PATN), a real-time privacy-preserving framework that leverages historical signals to generate adversarial perturbations proactively. The perturbations are applied immediately upon data acquisition, enabling continuous protection without disrupting application functionality. Experiments on two datasets demonstrate that PATN substantially degrades the performance of privacy inference models, achieving Attack Success Rate (ASR) of 40.11% and 44.65% (reducing inference accuracy to near-random) and increasing the Equal Error Rate (EER) from 8.30% and 7.56% to 41.65% and 46.22%. On ASR, PATN outperforms baseline methods by 16.16% and 31.96%, respectively.


Key Contributions

  • Predictive adversarial perturbation generation using historical sensor context, enabling real-time protection without buffering full sequences
  • Preserves utility for benign tasks (activity recognition) while degrading privacy inference models to near-random accuracy
  • Outperforms GAN-based and differential privacy baselines by 16.16% and 31.96% in attack success rate on two sensor datasets

🛡️ Threat Analysis

Input Manipulation Attack

PATN generates adversarial perturbations that cause ML classifiers (privacy inference models for gender, age, identity) to misclassify at inference time. The core contribution is a novel predictive adversarial perturbation generation method — the same technical space as adversarial example crafting, applied here as a user-side defense. An ML security researcher would learn a new real-time perturbation generation technique from this paper.
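The defining property of PATN is that the perturbation for each incoming sample is predicted from a window of *past* samples, so it can be applied the instant data is acquired, with no buffering of future readings. A minimal sketch of that streaming loop is below; the window size, perturbation budget, and the linear stand-in for the trained generator are all assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical sketch of PATN-style real-time protection (parameters and
# model are assumptions): a lightweight predictor maps a sliding window of
# past sensor samples to a bounded perturbation that is added to the next
# sample as soon as it arrives.

WINDOW = 16      # history length in samples (assumed)
EPS = 0.05       # L-inf perturbation budget (assumed)
CHANNELS = 6     # 3-axis accelerometer + 3-axis gyroscope

rng = np.random.default_rng(0)
# Stand-in for the trained perturbation generator: a fixed linear map.
W = rng.standard_normal((CHANNELS, WINDOW * CHANNELS)) * 0.01

def predict_perturbation(history: np.ndarray) -> np.ndarray:
    """Map the flattened history window to a budget-clipped perturbation."""
    delta = W @ history.reshape(-1)
    return np.clip(delta, -EPS, EPS)

def protect_stream(stream: np.ndarray) -> np.ndarray:
    """Perturb the stream sample-by-sample, as a real-time pipeline would."""
    protected = stream.copy()
    for t in range(WINDOW, len(stream)):
        protected[t] = stream[t] + predict_perturbation(stream[t - WINDOW:t])
    return protected

raw = rng.standard_normal((128, CHANNELS))
out = protect_stream(raw)
# Every protected sample stays within the perturbation budget.
assert np.all(np.abs(out - raw) <= EPS + 1e-9)
```

Because the generator only ever consumes samples that have already been observed, the per-sample latency is one forward pass of the predictor, which is what makes this compatible with continuous sensor APIs, unlike full-sequence GAN obfuscation.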


Details

Domains
time series
Model Types
CNN, RNN
Threat Tags
inference_time, digital
Datasets
MotionSense
Applications
mobile sensor privacy, activity recognition, attribute inference protection (gender, age, identity)