defense 2026

Disentangling Speaker Traits for Deepfake Source Verification via Chebyshev Polynomial and Riemannian Metric Learning

Xi Xuan 1,2, Wenxin Zhang 3, Zhiyu Li 4, Jennifer Williams 5, Ville Hautamäki 1, Tomi H. Kinnunen 1


Published on arXiv

2603.21875

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Demonstrates that speaker traits entangle significantly with source embeddings (a speaker verification model achieves 29.42% EER on the source verification task) and that the SDML framework effectively disentangles them

SDML (Speaker-Disentangled Metric Learning)

Novel technique introduced


Speech deepfake source verification systems aim to determine whether two synthetic speech utterances originate from the same source generator, often assuming that the resulting source embeddings are independent of speaker traits. However, this assumption has remained unverified. In this paper, we first investigate the impact of speaker factors on source verification. We then propose a speaker-disentangled metric learning (SDML) framework incorporating two novel loss functions. The first leverages Chebyshev polynomials to mitigate gradient instability during disentanglement optimization. The second projects source and speaker embeddings into hyperbolic space, leveraging Riemannian metric distances to reduce speaker information and learn more discriminative source features. Experimental results on the MLAAD benchmark, evaluated under four newly proposed protocols designed for source-speaker disentanglement scenarios, demonstrate the effectiveness of the SDML framework. The code, evaluation protocols, and demo website are available at https://github.com/xxuan-acoustics/RiemannSD-Net.
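The entry does not reproduce the paper's loss definitions, but the core idea of the first loss — replacing a non-smooth disentanglement penalty with a truncated Chebyshev series so that gradients stay bounded — can be sketched as follows. All function names and the choice of coefficients here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def chebyshev_eval(x, coeffs):
    """Evaluate a truncated Chebyshev series sum_k c_k * T_k(x) on x in [-1, 1]
    via the three-term recurrence T_{k+1}(x) = 2x*T_k(x) - T_{k-1}(x)."""
    t_prev, t_curr = np.ones_like(x), x
    result = coeffs[0] * t_prev
    if len(coeffs) > 1:
        result = result + coeffs[1] * t_curr
    for c in coeffs[2:]:
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
        result = result + c * t_curr
    return result

def smoothed_similarity_penalty(source_emb, speaker_emb, coeffs=(0.5, 0.0, 0.5)):
    """Hypothetical disentanglement penalty (not the paper's exact loss):
    cosine similarity between source and speaker embeddings, passed through a
    Chebyshev series instead of a hard |.| or hinge term, so gradients remain
    smooth. With coeffs (0.5, 0, 0.5) the series is 0.5*T_0 + 0.5*T_2 = x^2,
    a smooth proxy for |x| that is minimized when the embeddings are orthogonal."""
    cos = np.sum(source_emb * speaker_emb, axis=-1) / (
        np.linalg.norm(source_emb, axis=-1) * np.linalg.norm(speaker_emb, axis=-1)
    )
    return float(np.mean(chebyshev_eval(cos, np.asarray(coeffs))))
```

Orthogonal source/speaker embeddings then incur zero penalty, while parallel ones incur the maximum penalty of 1.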


Key Contributions

  • Speaker-disentangled metric learning (SDML) framework using Chebyshev polynomial-based loss to stabilize disentanglement optimization
  • Hyperbolic space projection with Riemannian metric distances (HAM-Softmax) to separate speaker and source embeddings
  • Four new evaluation protocols for source-speaker disentanglement scenarios on MLAAD benchmark
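The second contribution relies on distances in hyperbolic space. A minimal sketch of the standard Poincaré-ball Riemannian distance, and of how distance-based logits could feed a softmax over generator classes, is given below; `ham_softmax_logits`, the class-center parameterization, and the `scale` temperature are assumptions for illustration, not the paper's HAM-Softmax definition.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Riemannian distance on the Poincare ball (points with ||x|| < 1):
    d(u, v) = arccosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))."""
    sq = np.sum((u - v) ** 2, axis=-1)
    denom = (1.0 - np.sum(u * u, axis=-1)) * (1.0 - np.sum(v * v, axis=-1))
    return np.arccosh(1.0 + 2.0 * sq / np.maximum(denom, eps))

def ham_softmax_logits(emb, class_centers, scale=4.0):
    """Hypothetical hyperbolic-metric softmax head: logits are negated Poincare
    distances to per-generator class centers, so embeddings closer to a center
    (in the Riemannian metric) get a higher logit. `scale` is an assumed
    temperature hyperparameter."""
    d = np.stack([poincare_distance(emb, c) for c in class_centers], axis=-1)
    return -scale * d
```

Because distances near the ball's boundary grow without bound, such a metric can separate speaker and source clusters more aggressively than Euclidean distance for the same embedding dimension.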

🛡️ Threat Analysis

Output Integrity Attack

The paper addresses verifying the source (i.e., which generator) of synthetic speech outputs. Source verification is fundamentally about authenticating AI-generated content provenance: determining which model or generator produced a given synthetic audio sample. This is output integrity verification, not detection of whether audio is synthetic (which would also fall under ML09, but as a different flavor). The paper builds a system to trace deepfake audio back to its source generator, which is content provenance tracking.


Details

Domains
audio, generative
Model Types
transformer
Threat Tags
inference_time
Datasets
MLAAD
Applications
deepfake audio detection, source attribution, synthetic speech verification