
Multi-Speaker Conversational Audio Deepfake: Taxonomy, Dataset and Pilot Study

Alabi Ahmed, Vandana Janeja, Sanjay Purushotham

0 citations · 44 references · ICDMW


Published on arXiv

2602.00295

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Existing baseline models (LFCC-LCNN, RawNet2, Wav2Vec 2.0) trained for single-speaker deepfake detection show significant performance gaps when applied to multi-speaker conversational audio deepfakes, highlighting a major underexplored threat.

MsCADD

Novel technique introduced


Rapid advances in text-to-speech (TTS) technology have made audio deepfakes increasingly realistic and accessible, raising significant security and trust concerns. While existing research has largely focused on detecting single-speaker audio deepfakes, real-world malicious applications in multi-speaker conversational settings are emerging as a major underexplored threat. To address this gap, we propose a conceptual taxonomy of multi-speaker conversational audio deepfakes, distinguishing between partial manipulations (one or more speakers altered) and full manipulations (entire conversations synthesized). As a first step, we introduce the Multi-speaker Conversational Audio Deepfakes Dataset (MsCADD), comprising 2,830 audio clips of real and fully synthetic two-speaker conversations generated with VITS and SoundStorm-based NotebookLM models to simulate natural dialogue with variations in speaker gender and conversational spontaneity. MsCADD is limited to text-to-speech (TTS) deepfakes. We benchmark three neural baseline models (LFCC-LCNN, RawNet2, and Wav2Vec 2.0) on this dataset and report performance in terms of F1 score, accuracy, true positive rate (TPR), and true negative rate (TNR). While these baselines provide a useful reference, the results also reveal a significant gap in reliably detecting synthetic voices under varied conversational dynamics. Our dataset and benchmarks lay a foundation for future research on deepfake detection in conversational scenarios, an area that remains highly underexplored yet poses a major threat to trustworthy information in audio settings. The MsCADD dataset is publicly available to support reproducibility and benchmarking by the research community.
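For readers unfamiliar with the reported metrics, the sketch below shows how F1 score, accuracy, TPR, and TNR are computed for a binary real-vs-fake classifier of this kind. The label convention (1 = deepfake as the positive class, 0 = real audio) and the function name are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: the four metrics the paper reports for a binary
# real-vs-fake audio classifier. Assumed convention: 1 = deepfake
# (positive class), 0 = real audio.

def detection_metrics(y_true, y_pred):
    # Confusion-matrix counts over paired ground-truth/prediction labels.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0  # recall on deepfake clips
    tnr = tn / (tn + fp) if (tn + fp) else 0.0  # recall on real clips
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = (2 * precision * tpr / (precision + tpr)
          if (precision + tpr) else 0.0)
    return {"accuracy": accuracy, "f1": f1, "tpr": tpr, "tnr": tnr}
```

Reporting TPR and TNR separately, rather than accuracy alone, matters here because a detector can score well on single-speaker benchmarks yet miss one of the two classes badly under conversational dynamics.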


Key Contributions

  • Conceptual taxonomy of multi-speaker conversational audio deepfakes distinguishing partial from full manipulations
  • MsCADD: a new dataset of 2,830 real and fully synthetic two-speaker conversational audio clips generated using VITS and SoundStorm/NotebookLM
  • Baseline benchmarking of LFCC-LCNN, RawNet2, and Wav2Vec 2.0 revealing significant performance gaps in multi-speaker conversational deepfake detection

🛡️ Threat Analysis

Output Integrity Attack

The paper directly addresses detection of AI-generated audio content (audio deepfakes), a core output integrity concern. It introduces a dataset and taxonomy for a new sub-domain of deepfake detection (multi-speaker conversational audio) and benchmarks existing detection models, revealing significant performance gaps.


Details

Domains
audio
Model Types
cnn, transformer
Threat Tags
inference_time
Datasets
MsCADD, ASVspoof, ADD 2022/2023
Applications
audio deepfake detection, multi-speaker conversation authentication