
Res-MIA: A Training-Free Resolution-Based Membership Inference Attack on Federated Learning Models

Mohammad Zare, Pirooz Shamsinejadbabaki

0 citations · 18 references · arXiv


Published on arXiv · 2601.17378

Membership Inference Attack

OWASP ML Top 10 — ML04

Key Finding

Achieves an AUC of up to 0.88 on a federated ResNet-18 trained on CIFAR-10, outperforming training-free baselines while requiring no shadow models or auxiliary data.

Res-MIA

Novel technique introduced


Membership inference attacks (MIAs) pose a serious threat to the privacy of machine learning models by allowing adversaries to determine whether a specific data sample was included in the training set. Although federated learning (FL) is widely regarded as a privacy-aware training paradigm due to its decentralized nature, recent evidence shows that the final global model can still leak sensitive membership information through black-box access. In this paper, we introduce Res-MIA, a novel training-free and black-box membership inference attack that exploits the sensitivity of deep models to high-frequency input details. Res-MIA progressively degrades the input resolution using controlled downsampling and restoration operations, and analyzes the resulting confidence decay in the model's predictions. Our key insight is that training samples exhibit a significantly steeper confidence decline under resolution erosion compared to non-member samples, revealing a robust membership signal. Res-MIA requires no shadow models, no auxiliary data, and only a limited number of forward queries to the target model. We evaluate the proposed attack on a federated ResNet-18 trained on CIFAR-10, where it consistently outperforms existing training-free baselines and achieves an AUC of up to 0.88 with minimal computational overhead. These findings highlight frequency-sensitive overfitting as an important and previously underexplored source of privacy leakage in federated learning, and emphasize the need for privacy-aware model designs that reduce reliance on fine-grained, non-robust input features.
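The core mechanism described above can be illustrated concretely. The paper does not publish its exact downsampling/restoration schedule or scoring rule, so the following is a minimal numpy sketch under stated assumptions: block-average downsampling followed by nearest-neighbor restoration stands in for the controlled resolution erosion, `predict_fn` is an assumed black-box interface returning the model's top-class confidence, and the membership score is simply the gap between the original confidence and the mean confidence under degradation (a steeper decay yields a larger score, which the paper's insight associates with training-set members).

```python
import numpy as np

def degrade(img, factor):
    """Erode resolution: downsample by block averaging, then restore the
    original size with nearest-neighbor upsampling (removes high-frequency detail)."""
    h, w = img.shape
    small = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

def res_mia_score(predict_fn, img, factors=(2, 4, 8)):
    """Membership score: confidence drop under progressive resolution erosion.
    `predict_fn` is a hypothetical black-box query returning top-class confidence;
    only len(factors) + 1 forward queries are needed per sample."""
    base = predict_fn(img)
    degraded = [predict_fn(degrade(img, f)) for f in factors]
    return base - float(np.mean(degraded))
```

In this sketch a sample whose confidence relies on fine-grained, high-frequency features loses confidence quickly as those features are erased, producing a high score; a sample classified from robust low-frequency structure scores near zero.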


Key Contributions

  • Resolution-based membership signal: training samples exhibit significantly steeper confidence decay under progressive input downsampling/restoration than non-members, revealing a novel frequency-sensitive overfitting signal.
  • Training-free, shadow-free MIA pipeline requiring no auxiliary data and only a small number of forward queries to the black-box target model.
  • Empirical demonstration on federated ResNet-18/CIFAR-10 achieving AUC up to 0.88, outperforming existing training-free MIA baselines.
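The AUC figure quoted above measures how well the per-sample score separates members from non-members. As a reference for how such a number is computed, here is a small numpy sketch of threshold-free AUC via the Mann-Whitney rank statistic; the score lists are placeholders, not the paper's data.

```python
import numpy as np

def attack_auc(member_scores, nonmember_scores):
    """AUC of a membership score: the probability that a randomly chosen
    member outscores a randomly chosen non-member (ties count half)."""
    m = np.asarray(member_scores, dtype=float)
    n = np.asarray(nonmember_scores, dtype=float)
    wins = (m[:, None] > n[None, :]).sum() + 0.5 * (m[:, None] == n[None, :]).sum()
    return wins / (len(m) * len(n))
```

An AUC of 0.5 means the score is no better than guessing; 1.0 means perfect separation of members from non-members at some threshold.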

🛡️ Threat Analysis

Membership Inference Attack

Res-MIA is a membership inference attack that determines whether a specific data sample was included in a federated learning model's training set — the textbook ML04 threat. The paper's entire contribution is the attack methodology and its evaluation.


Details

Domains
vision · federated-learning
Model Types
cnn · federated
Threat Tags
black_box · inference_time
Datasets
CIFAR-10
Applications
federated learning · image classification