Mixture of Low-Rank Adapter Experts in Generalizable Audio Deepfake Detection
Janne Laakkonen 1, Ivan Kukanov 2, Ville Hautamäki 1
Published on arXiv: 2509.13878
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
MoE-LoRA reduces the average out-of-domain equal error rate (EER) from 8.55% to 6.08% relative to a standard fine-tuning baseline on Wav2Vec2+AASIST.
MoE-LoRA
Novel technique introduced
Foundation models such as Wav2Vec2 excel at representation learning in speech tasks, including audio deepfake detection. However, after being fine-tuned on a fixed set of bonafide and spoofed audio clips, they often fail to generalize to novel deepfake methods not represented in training. To address this, we propose a mixture-of-LoRA-experts approach that integrates multiple low-rank adapters (LoRA) into the model's attention layers. A routing mechanism selectively activates specialized experts, enhancing adaptability to evolving deepfake attacks. Experimental results show that our method outperforms standard fine-tuning in both in-domain and out-of-domain scenarios, reducing equal error rates relative to baseline models. Notably, our best MoE-LoRA model lowers the average out-of-domain EER from 8.55% to 6.08%, demonstrating its effectiveness in achieving generalizable audio deepfake detection.
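The headline metric here is the equal error rate (EER): the operating point at which the false-acceptance rate (spoofed audio accepted as bonafide) equals the false-rejection rate (bonafide audio rejected). As an illustrative sketch only, the `compute_eer` helper and its simple threshold sweep below are assumptions, not the paper's evaluation code:

```python
import numpy as np

def compute_eer(bonafide_scores, spoof_scores):
    """EER via a sweep over candidate thresholds, assuming a higher
    score means 'more bonafide'. Returns the rate at the threshold
    where FAR and FRR are closest (averaged if they don't meet exactly)."""
    bonafide = np.asarray(bonafide_scores, dtype=float)
    spoof = np.asarray(spoof_scores, dtype=float)
    thresholds = np.sort(np.concatenate([bonafide, spoof]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(spoof >= t)    # spoofed clips accepted as bonafide
        frr = np.mean(bonafide < t)  # bonafide clips rejected
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer
```

On a perfectly separable toy set (e.g. bonafide scores all above spoof scores) this returns 0.0; a lower EER on out-of-domain corpora is what the 8.55% → 6.08% result reports.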
Key Contributions
- Sparse Mixture-of-LoRA-Experts (MoE-LoRA) framework integrated into Wav2Vec2 attention layers for audio deepfake detection
- Sparsely gated routing mechanism that dynamically selects subsets of LoRA experts, enabling specialization across diverse spoofing cues
- Demonstrated improved in-domain and out-of-domain generalization, reducing average out-of-domain EER from 8.55% to 6.08% over the baseline
🛡️ Threat Analysis
Proposes a novel neural architecture for detecting AI-generated (deepfake) audio — a direct contribution to output integrity and AI-generated content detection. The paper's primary focus is improving generalization of deepfake audio detectors across unseen synthesis methods.