Leveraging Membership Inference Attacks for Privacy Measurement in Federated Learning for Remote Sensing Images

Anh-Kiet Duong 1, Petra Gomez-Krämer 1, Hoàng-Ân Lê 2, Minh-Tan Pham 2

0 citations · 17 references · arXiv

Published on arXiv

2601.06200

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

Communication-efficient FL strategies reduce membership inference attack success rates while maintaining competitive classification accuracy on remote sensing datasets, confirming MIA as a practical privacy metric for FL system design.


Federated Learning (FL) enables collaborative model training while keeping training data localized, preserving privacy in domains such as remote sensing. However, recent studies show that FL models may still leak sensitive information through their outputs, motivating the need for rigorous privacy evaluation. In this paper, we leverage membership inference attacks (MIA) as a quantitative privacy measurement framework for FL applied to remote sensing image classification. We evaluate multiple black-box MIA techniques, including entropy-based attacks, modified entropy attacks, and the likelihood ratio attack, across different FL algorithms and communication strategies. Experiments on two public scene classification datasets demonstrate that MIA reveals privacy leakage that accuracy alone does not capture. Our results show that communication-efficient FL strategies reduce MIA success rates while maintaining competitive performance. These findings confirm MIA as a practical privacy metric and highlight the importance of integrating privacy measurement into FL system design for remote sensing applications.
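The entropy-based and modified-entropy attacks named in the abstract can be sketched as threshold tests on the model's softmax output: members tend to receive more confident (lower-entropy) predictions. The sketch below is illustrative, not the paper's implementation; the function names and the threshold value are our own, and the modified-entropy score follows the commonly used formulation of Song and Mittal.

```python
import numpy as np

def prediction_entropy(probs):
    # Shannon entropy of the softmax output; training members tend
    # to have lower entropy (more confident predictions).
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def modified_entropy(probs, labels):
    # Modified entropy: weights the true-class probability so that
    # correct, confident predictions score lowest (strongest member signal).
    p = np.clip(probs, 1e-12, 1.0 - 1e-12)
    n, k = p.shape
    true_p = p[np.arange(n), labels]
    score = -(1.0 - true_p) * np.log(true_p)
    mask = np.ones((n, k), dtype=bool)
    mask[np.arange(n), labels] = False
    other = p[mask].reshape(n, k - 1)
    score += -np.sum(other * np.log(1.0 - other), axis=-1)
    return score

def entropy_attack(probs, threshold):
    # Predict "member" (True) when entropy falls below a threshold,
    # which in practice is calibrated on shadow-model outputs.
    return prediction_entropy(probs) < threshold
```

Both scores are black-box in the sense used by the paper: they require only the model's output probabilities, not its parameters or gradients.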


Key Contributions

  • Establishes MIA as a quantitative privacy measurement framework for evaluating federated learning systems applied to remote sensing image classification
  • Evaluates multiple black-box MIA techniques (entropy-based, modified entropy, likelihood ratio attack) across different FL algorithms and communication strategies
  • Shows that communication-efficient FL strategies reduce MIA success rates while maintaining competitive classification performance

🛡️ Threat Analysis

Membership Inference Attack

The paper's primary contribution is using membership inference attacks (entropy-based, modified entropy, likelihood ratio) to quantitatively measure privacy leakage in federated learning models — a direct instantiation of the ML04 threat where an adversary determines whether specific data points were in the training set.
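The likelihood ratio attack can likewise be sketched in black-box form: fit Gaussians to confidence scores from shadow models trained with and without the target example, then score membership by the log-likelihood ratio. This is a minimal sketch assuming precomputed shadow-model confidences; the function name and inputs are hypothetical, not taken from the paper.

```python
import numpy as np

def lira_score(target_conf, shadow_in_confs, shadow_out_confs):
    # Log-likelihood ratio of membership under Gaussian models fitted to
    # shadow-model confidences (in the spirit of likelihood-ratio attacks).
    mu_in, sd_in = shadow_in_confs.mean(), shadow_in_confs.std() + 1e-12
    mu_out, sd_out = shadow_out_confs.mean(), shadow_out_confs.std() + 1e-12

    def log_normal_pdf(x, mu, sd):
        return -0.5 * np.log(2.0 * np.pi * sd**2) - (x - mu) ** 2 / (2.0 * sd**2)

    # Positive score: the target's confidence looks more like a member's.
    return (log_normal_pdf(target_conf, mu_in, sd_in)
            - log_normal_pdf(target_conf, mu_out, sd_out))
```

A threshold on this score trades off true- and false-positive rates, which is how MIA success is typically reported when used as a privacy metric.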


Details

Domains
vision, federated-learning
Model Types
cnn, federated
Threat Tags
black_box, inference_time
Datasets
remote sensing scene classification datasets (two public benchmarks, unspecified in available text)
Applications
remote sensing image classification, federated learning privacy evaluation