
Towards Reliable Audio Deepfake Attribution and Model Recognition: A Multi-Level Autoencoder-Based Framework

Andrea Di Pierno 1,2, Luca Guarnera 2, Dario Allegra 2, Sebastiano Battiato 2


Published on arXiv

2508.02521

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

ADA classifier achieves F1-scores over 95% across all datasets; ADMR module reaches 96.31% macro F1 across six generative model classes under open-set conditions.

LAVA (Layered Architecture for Voice Attribution)

Novel technique introduced


The proliferation of audio deepfakes poses a growing threat to trust in digital communications. While detection methods have advanced, attributing audio deepfakes to their source models remains an underexplored yet crucial challenge. In this paper we introduce LAVA (Layered Architecture for Voice Attribution), a hierarchical framework for audio deepfake detection and model recognition that leverages attention-enhanced latent representations extracted by a convolutional autoencoder trained solely on fake audio. Two specialized classifiers operate on these features: Audio Deepfake Attribution (ADA), which identifies the generation technology, and Audio Deepfake Model Recognition (ADMR), which recognizes the specific generative model instance. To improve robustness under open-set conditions, we incorporate confidence-based rejection thresholds. Experiments on ASVspoof2021, FakeOrReal, and CodecFake show strong performance: the ADA classifier achieves F1-scores over 95% across all datasets, and the ADMR module reaches 96.31% macro F1 across six classes. Additional tests on unseen attacks from ASVspoof2019 LA and error propagation analysis confirm LAVA's robustness and reliability. The framework advances the field by introducing a supervised approach to deepfake attribution and model recognition under open-set conditions, validated on public benchmarks. Models and code are available at https://www.github.com/adipiz99/lava-framework.
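The hierarchical flow described above (encode, attribute the generation technology with ADA, then recognize the specific model with ADMR, rejecting low-confidence inputs at each stage) can be sketched as follows. This is an illustrative outline under assumed interfaces, not the authors' actual API; the function names `encoder`, `ada`, and `admr` are hypothetical placeholders.

```python
def lava_pipeline(audio_features, encoder, ada, admr, threshold=0.9):
    """Hierarchical attribution sketch (hypothetical interfaces):
    encoder maps features to a latent code; ada and admr each return
    a (label, confidence) pair. Low-confidence stages are rejected,
    mirroring LAVA's open-set rejection thresholds."""
    latent = encoder(audio_features)       # attention-enhanced latent code
    tech, tech_conf = ada(latent)          # generation technology (ADA)
    if tech_conf < threshold:
        return {"verdict": "unknown technology"}
    model, model_conf = admr(latent)       # specific generative model (ADMR)
    if model_conf < threshold:
        return {"verdict": tech, "model": "unknown"}
    return {"verdict": tech, "model": model}
```

Because the two classifiers are chained, a rejection at the ADA stage prevents the ADMR stage from ever emitting a spurious model label, which is the error-propagation behavior the paper analyzes.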


Key Contributions

  • LAVA: a hierarchical autoencoder framework using attention-enhanced latent representations trained solely on fake audio for deepfake detection and attribution
  • ADA and ADMR classifiers for identifying generation technology type and specific generative model instance respectively
  • Confidence-based rejection thresholds for open-set robustness, with publicly released models and code
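The confidence-based rejection mentioned in the last bullet is commonly implemented as a softmax-threshold rule: if the top class probability falls below a threshold, the sample is marked unknown rather than force-assigned to a known class. A minimal sketch of that idea (the threshold value and label set here are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify_with_rejection(logits, labels, threshold=0.9):
    """Return the predicted label, or None ("reject as unknown") when
    the top softmax confidence is below the threshold -- the open-set
    rejection rule LAVA applies on top of its classifiers."""
    probs = softmax(np.asarray(logits, dtype=float))
    top = int(probs.argmax())
    if probs[top] < threshold:
        return None  # out-of-distribution / unseen attack
    return labels[top]

labels = ["TTS", "VC", "codec"]
print(classify_with_rejection([8.0, 0.5, 0.1], labels))  # confident -> "TTS"
print(classify_with_rejection([1.0, 0.9, 0.8], labels))  # flat -> None (rejected)
```

Raising the threshold trades recall on known classes for fewer false attributions on unseen generators, which is the relevant knob when evaluating against held-out attacks such as ASVspoof2019 LA.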

🛡️ Threat Analysis

Output Integrity Attack

Primary contribution is AI-generated audio content detection and attribution — identifying fake audio and tracing it to its generative model/technology. This is novel forensic architecture for content integrity and provenance, directly within ML09's scope of AI-generated content detection.


Details

Domains
audio
Model Types
CNN, Transformer
Threat Tags
inference_time
Datasets
ASVspoof2021, FakeOrReal, CodecFake, ASVspoof2019 LA
Applications
audio deepfake detection, synthetic speech attribution, digital audio forensics