attack · arXiv · Oct 15, 2025
Hamdan Al-Ali, Ali Reza Ghavamipour, Tommaso Caselli et al. · Mohamed bin Zayed University of Artificial Intelligence · Maastricht University +2 more
Infers private personal attributes from federated ASR model weight differentials using shadow models and centroid classification
Model Inversion Attack · audio · federated-learning
Federated learning is a common method for privacy-preserving training of machine learning models. In this paper, we analyze the vulnerability of ASR models to attribute inference attacks in the federated setting. We test a non-parametric white-box attack method under a passive threat model on three ASR models: Wav2Vec2, HuBERT, and Whisper. The attack operates solely on weight differentials without access to raw speech from target speakers. We demonstrate attack feasibility on sensitive demographic and clinical attributes: gender, age, accent, emotion, and dysarthria. Our findings indicate that attributes that are underrepresented or absent in the pre-training data are more vulnerable to such inference attacks. In particular, information about accents can be reliably inferred from all models. Our findings expose previously undocumented vulnerabilities in federated ASR models and offer insights towards improved security.
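The abstract describes inferring attributes from weight differentials alone, using shadow models and centroid classification. A minimal sketch of that idea, assuming the attacker can train shadow updates for speakers with known attribute labels (all names and the synthetic two-layer "model" below are illustrative, not the paper's implementation):

```python
import numpy as np

def flatten_delta(w_before, w_after):
    """Flatten a per-round weight update (list of arrays) into one vector."""
    return np.concatenate([(a - b).ravel() for b, a in zip(w_before, w_after)])

def fit_centroids(shadow_deltas, labels):
    """Average shadow-model deltas per attribute value (e.g. per accent)."""
    centroids = {}
    for value in set(labels):
        members = [d for d, l in zip(shadow_deltas, labels) if l == value]
        centroids[value] = np.mean(members, axis=0)
    return centroids

def infer_attribute(target_delta, centroids):
    """Non-parametric classification: nearest centroid by cosine similarity."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(centroids, key=lambda v: cos(target_delta, centroids[v]))

# Toy demo: synthetic two-layer "model" standing in for an ASR network.
rng = np.random.default_rng(0)
base = [rng.normal(size=(4, 4)), rng.normal(size=4)]

def shadow_update(direction):
    # Attribute-correlated drift in the update, purely for illustration.
    return [w + 0.1 * direction * rng.normal(loc=1.0, size=w.shape) for w in base]

deltas, labels = [], []
for accent, sign in [("us", 1.0), ("uk", -1.0)]:
    for _ in range(5):
        deltas.append(flatten_delta(base, shadow_update(sign)))
        labels.append(accent)

centroids = fit_centroids(deltas, labels)
target = flatten_delta(base, shadow_update(1.0))
print(infer_attribute(target, centroids))
```

Note the attack is passive and white-box, consistent with the threat model in the abstract: it only reads the weight differential of the target client, never raw speech.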
transformer · federated · Mohamed bin Zayed University of Artificial Intelligence · Maastricht University · University of Groningen +1 more