
FedPoP: Federated Learning Meets Proof of Participation

Devriş İşler 1,2, Elina van Kempen 3, Seoyeon Hwang 4, Nikolaos Laoutaris 1

0 citations · 37 references

Published on arXiv: 2511.08207

Model Theft

OWASP ML Top 10 — ML05

Key Finding

FedPoP adds only 0.97 seconds of per-round overhead atop secure FL aggregation and enables third-party verification of a client's participation in 0.0612 seconds, making it practical for real-world deployments.

FedPoP

Novel technique introduced


Federated learning (FL) offers privacy-preserving, distributed machine learning, allowing clients to contribute to a global model without revealing their local data. As models increasingly serve as monetizable digital assets, the ability to prove participation in their training becomes essential for establishing ownership. In this paper, we address this emerging need by introducing FedPoP, a novel FL framework that allows non-linkable proof of participation while preserving client anonymity and privacy without requiring either extensive computations or a public ledger. FedPoP is designed to seamlessly integrate with existing secure aggregation protocols to ensure compatibility with real-world FL deployments. We provide a proof-of-concept implementation and an empirical evaluation under realistic client dropouts. In our prototype, FedPoP introduces 0.97 seconds of per-round overhead atop securely aggregated FL and enables a client to prove its participation/contribution to a model held by a third party in 0.0612 seconds. These results indicate FedPoP is practical for real-world deployments that require auditable participation without sacrificing privacy.


Key Contributions

  • FedPoP framework providing non-linkable, anonymous proof-of-participation in FL without a public ledger or heavy cryptographic computation
  • Seamless integration with existing secure aggregation protocols, adding only 0.97 seconds of per-round overhead
  • Proof-of-concept implementation enabling a third party to verify a client's participation claim in 0.0612 seconds under realistic client dropout conditions

🛡️ Threat Analysis

Model Theft

FedPoP enables FL clients to cryptographically prove their participation in training a model, serving a role similar to ownership watermarking but achieved via zero-knowledge-style proofs, and thereby directly defends against unauthorized model monetization and IP misappropriation by service providers.
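FedPoP's actual construction is not reproduced here, but the general flavor of a privacy-preserving participation proof can be conveyed with a set-membership sketch: each round participant submits a commitment to a fresh random nonce, the server publishes a Merkle root over the round's commitments, and a client later proves membership by revealing its nonce plus a sibling path. This is an illustrative stand-in (all names and the Merkle construction are assumptions, not the paper's protocol):

```python
import hashlib
import os

def h(data: bytes) -> bytes:
    """SHA-256 hash, used for both commitments and tree nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf hashes up to a single Merkle root."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling path for leaves[index]: list of (sibling_hash, node_is_right)."""
    path, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[i ^ 1], i % 2))  # i ^ 1 is the sibling's index
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def verify(root, leaf, path):
    """Recompute the root from a leaf and its sibling path."""
    node = leaf
    for sibling, node_is_right in path:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

# Round workflow (illustrative): participants commit to fresh random nonces,
# and the server publishes the round's Merkle root.
nonces = [os.urandom(32) for _ in range(4)]
commitments = [h(n) for n in nonces]
round_root = merkle_root(commitments)

# Later, a client proves participation by revealing its nonce and sibling path;
# the verifier checks set membership without linking rounds to each other.
proof = merkle_proof(commitments, 2)
assert verify(round_root, h(nonces[2]), proof)
```

Note that a bare Merkle proof alone does not give FedPoP's anonymity or non-linkability guarantees (the revealed leaf position is linkable across verifications); the paper's cryptographic machinery exists precisely to close such gaps.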


Details

Domains
federated-learning
Model Types
federated
Threat Tags
training_time
Applications
federated learning, model IP protection, ownership verification