
PRISM-FCP: Byzantine-Resilient Federated Conformal Prediction via Partial Sharing

Ehsan Lari 1, Reza Arablouei 2, Stefan Werner 1,3

0 citations · 46 references · arXiv (Cornell University)


Published on arXiv

2602.18396

Data Poisoning Attack

OWASP ML Top 10 — ML02

Key Finding

PRISM-FCP maintains nominal coverage guarantees under Byzantine attacks and avoids the interval inflation seen in standard FCP, while also reducing communication overhead.

PRISM-FCP

Novel technique introduced


We propose PRISM-FCP (Partial shaRing and robust calIbration with Statistical Margins for Federated Conformal Prediction), a Byzantine-resilient federated conformal prediction framework that uses partial model sharing to improve robustness against Byzantine attacks during both model training and conformal calibration. Existing approaches address adversarial behavior only in the calibration stage, leaving the learned model susceptible to poisoned updates. In contrast, PRISM-FCP mitigates attacks end-to-end. During training, clients partially share updates by transmitting only $M$ of $D$ parameters per round. This attenuates the expected energy of an adversary's perturbation in the aggregated update by a factor of $M/D$, yielding lower mean-square error (MSE) and tighter prediction intervals. During calibration, clients convert nonconformity scores into characterization vectors, compute distance-based maliciousness scores, and downweight or filter suspected Byzantine contributions before estimating the conformal quantile. Extensive experiments on both synthetic data and the UCI Superconductivity dataset demonstrate that PRISM-FCP maintains nominal coverage guarantees under Byzantine attacks and avoids the interval inflation observed in standard FCP, while reducing communication, providing a robust and communication-efficient approach to federated uncertainty quantification.
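The claimed $M/D$ attenuation of adversarial perturbation energy can be checked with a quick simulation. This is a hypothetical sketch, not the paper's protocol: the dimensions, the Gaussian perturbation, and the uniformly random coordinate mask are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 1000, 100      # D model parameters, M shared per round (illustrative sizes)
trials = 2000

full_energy = np.empty(trials)
partial_energy = np.empty(trials)
for t in range(trials):
    perturb = rng.normal(size=D)                 # adversary's perturbation on its update
    full_energy[t] = np.sum(perturb ** 2)        # energy if all D entries were shared
    idx = rng.choice(D, size=M, replace=False)   # only M coordinates are transmitted
    masked = np.zeros(D)
    masked[idx] = perturb[idx]
    partial_energy[t] = np.sum(masked ** 2)      # perturbation energy reaching the aggregator

ratio = partial_energy.mean() / full_energy.mean()
print(f"empirical attenuation ~ {ratio:.3f}, predicted M/D = {M / D:.3f}")
```

Because each coordinate of the perturbation survives the mask with probability $M/D$, the expected transmitted energy is $M/D$ times the full energy, which the empirical ratio reproduces.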


Key Contributions

  • Partial parameter sharing (transmitting M of D parameters per round) that attenuates adversarial perturbation energy in aggregated updates by a factor of M/D, yielding tighter prediction intervals
  • Byzantine-resilient conformal calibration via characterization vectors and distance-based maliciousness scoring to downweight/filter suspected Byzantine nonconformity score contributions
  • End-to-end Byzantine resilience covering both training and calibration stages, unlike prior FCP approaches that only address calibration-phase attacks
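The calibration-stage defense in the second contribution can be sketched end to end. The quantile-based characterization vectors, the coordinate-wise-median reference point, and the MAD-based filtering threshold below are illustrative choices; the summary does not specify the paper's exact constructions.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.1                 # target miscoverage level
n_clients, n_cal = 10, 200

# Honest clients' nonconformity scores; two Byzantine clients inflate theirs.
scores = [np.abs(rng.normal(size=n_cal)) for _ in range(n_clients)]
for b in (0, 1):
    scores[b] = scores[b] + 5.0   # poisoned scores would inflate the conformal quantile

# Characterization vector per client: a few empirical quantiles of its scores
# (an illustrative construction, not necessarily the paper's).
qs = np.linspace(0.1, 0.9, 9)
chars = np.array([np.quantile(s, qs) for s in scores])

# Distance-based maliciousness score: distance to the coordinate-wise median vector.
median_char = np.median(chars, axis=0)
malice = np.linalg.norm(chars - median_char, axis=1)

# Filter clients whose maliciousness exceeds a robust (median + 3*MAD) threshold.
mad = np.median(np.abs(malice - np.median(malice)))
thresh = np.median(malice) + 3 * mad
kept = [s for s, m in zip(scores, malice) if m <= thresh]

# Estimate the conformal quantile from the surviving clients' pooled scores.
pooled = np.concatenate(kept)
n = pooled.size
q_hat = np.quantile(pooled, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
print(f"kept {len(kept)}/{n_clients} clients; conformal quantile = {q_hat:.3f}")
```

With the poisoned clients filtered out, the estimated quantile stays near the honest value instead of being dragged upward, which is the interval-inflation effect the paper reports avoiding.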

🛡️ Threat Analysis

Data Poisoning Attack

The primary threat model is Byzantine clients sending arbitrary/malicious model updates during federated learning training (and calibration), which is the canonical ML02 scenario. PRISM-FCP defends with partial parameter sharing (attenuating adversarial perturbation energy by M/D) and robust aggregation via distance-based maliciousness scoring — both are Byzantine-fault-tolerant FL protocol contributions squarely within ML02's defense scope.


Details

Domains
federated-learning
Model Types
federated · traditional_ml
Threat Tags
training_time · black_box · untargeted
Datasets
UCI Superconductivity · synthetic data
Applications
federated learning · uncertainty quantification · regression