defense 2025

Adversary-Aware Private Inference over Wireless Channels

Mohamed Seif 1, Malcolm Egan 2, Andrea J. Goldsmith 3, H. Vincent Poor 1

0 citations · 25 references · arXiv


Published on arXiv: 2510.20518

Model Inversion Attack

OWASP ML Top 10 — ML03

Key Finding

Provides provable lower bounds on adversarial reconstruction error for ML feature embeddings transmitted over wireless channels, integrating differential privacy with channel-aware encoding.


AI-based sensing at wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications, particularly for vision and perception tasks such as autonomous driving and environmental monitoring. AI systems rely on both efficient model learning and inference. In the inference phase, features extracted from sensing data are used for prediction tasks (e.g., classification or regression). In edge networks, sensors and model servers are often not co-located, so features must be communicated over the network. Because sensitive personal data can be reconstructed by an adversary, the features must be transformed before transmission to reduce the risk of privacy violations. While differential privacy mechanisms provide a means of protecting finite datasets, protection of individual features has not been addressed. In this paper, we propose a novel framework for privacy-preserving AI-based sensing, where devices apply transformations to extracted features before transmission to a model server.
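The transmit-side pipeline the abstract describes can be sketched in a few lines: reduce the feature dimension, then add calibrated noise before the embedding leaves the device. This is an illustrative sketch under assumed choices (a random Gaussian projection and a Gaussian-mechanism-style perturbation; the function and parameter names are ours), not the paper's exact mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_features(features, proj_dim, noise_std):
    """Dimensionality reduction followed by calibrated perturbation.

    Illustrative only: a random Gaussian projection stands in for the
    paper's transform, and noise_std is assumed to be calibrated to the
    desired privacy level.
    """
    d = features.shape[-1]
    # Project the d-dimensional embedding down to proj_dim dimensions.
    P = rng.normal(size=(d, proj_dim)) / np.sqrt(proj_dim)
    z = features @ P
    # Add independent Gaussian noise before transmission.
    return z + rng.normal(scale=noise_std, size=z.shape)

x = rng.normal(size=128)            # extracted feature embedding
y = privatize_features(x, 32, 0.5)  # what actually goes over the air
print(y.shape)                      # (32,)
```

The server (or an eavesdropper) only ever sees the reduced, noisy vector `y`, never the raw embedding `x`.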


Key Contributions

  • End-to-end private collaborative inference pipeline combining dimensionality reduction, controlled perturbation, and adaptive channel-aware feature encoding
  • Rigorous theoretical guarantees on adversarial reconstruction error for ML features transmitted over wireless channels
  • Analysis showing how wireless channel noise itself provides inherent privacy benefits that can be exploited in mechanism design
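The third contribution rests on a simple observation: independent Gaussian noise variances add, so noise the wireless channel contributes for free can be credited against the artificial perturbation the device must inject. A minimal sketch of that accounting, under an assumed additive-Gaussian channel model (the function name is ours):

```python
import math

def artificial_noise_std(target_std, channel_std):
    """Artificial noise needed on top of an additive Gaussian channel.

    Variances of independent noises add, so the device only needs to
    make up the gap between the target total noise and what the channel
    already provides. Illustrative model, not the paper's mechanism.
    """
    gap = target_std**2 - channel_std**2
    return math.sqrt(gap) if gap > 0 else 0.0

print(artificial_noise_std(1.0, 0.0))  # 1.0 (noiseless channel: no help)
print(artificial_noise_std(1.0, 0.6))  # ≈ 0.8
print(artificial_noise_std(1.0, 1.5))  # 0.0 (channel noise alone suffices)
```

The noisier the channel, the less artificial perturbation is needed for the same privacy level, which is exactly the benefit a channel-aware mechanism can exploit.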

🛡️ Threat Analysis

Model Inversion Attack

The adversary's goal is to reconstruct private sensing data (e.g., images of individuals) from intercepted ML feature embeddings — a concrete embedding/feature inversion attack. The paper defends against this by introducing calibrated perturbations and dimensionality reduction, with provable lower bounds on adversarial reconstruction error.
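A lower bound on reconstruction error can be made concrete in the simplest setting: if the feature has a Gaussian prior and the adversary observes it through independent Gaussian noise, no estimator beats the per-coordinate minimum mean-squared error s²σ²/(s² + σ²). The simulation below (our own toy model, not the paper's bound) checks that the optimal linear estimator attains it:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_samples = 64, 20000
s, sigma = 1.0, 0.7  # prior std of the feature, std of the perturbation

x = rng.normal(scale=s, size=(n_samples, d))      # private features
y = x + rng.normal(scale=sigma, size=(n_samples, d))  # intercepted noisy view

# For Gaussian x and Gaussian noise, the MMSE estimator is linear:
w = s**2 / (s**2 + sigma**2)
x_hat = w * y

empirical_mse = np.mean((x_hat - x) ** 2)
mmse_bound = s**2 * sigma**2 / (s**2 + sigma**2)  # floor on any adversary's MSE
print(round(empirical_mse, 3), round(mmse_bound, 3))
```

Any adversary's reconstruction MSE is floored at `mmse_bound` regardless of compute, which is the flavor of guarantee the paper establishes for its (more general) setting.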


Details

Domains
vision
Model Types
cnn
Threat Tags
inference_time · grey_box
Applications
autonomous driving · environmental monitoring · edge AI inference · collaborative inference