defense 2025

KLASSify to Verify: Audio-Visual Deepfake Detection Using SSL-based Audio and Handcrafted Visual Features

Ivan Kukanov, Jun Wah Ng



Published on arXiv: 2508.07337

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Multimodal system achieves 92.78% AUC for deepfake classification and 0.3536 IoU for temporal localization (audio-only) on AV-Deepfake1M++

KLASSify

Novel technique introduced


The rapid development of audio-driven talking-head generators and advanced Text-To-Speech (TTS) models has led to more sophisticated temporal deepfakes. These advances highlight the need for robust methods capable of detecting and localizing deepfakes, even under novel, unseen attack scenarios. Current state-of-the-art deepfake detectors, while accurate, are often computationally expensive and struggle to generalize to novel manipulation techniques. To address these challenges, we propose multimodal approaches for the AV-Deepfake1M 2025 challenge. For the visual modality, we leverage handcrafted features to improve interpretability and adaptability. For the audio modality, we adapt a self-supervised learning (SSL) backbone coupled with graph attention networks to capture rich audio representations, improving detection robustness. Our approach strikes a balance between performance and real-world deployability, focusing on resilience and potential interpretability. On the AV-Deepfake1M++ dataset, our multimodal system achieves an AUC of 92.78% on the deepfake classification task and an IoU of 0.3536 for temporal localization using only the audio modality.
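Temporal localization is scored by the overlap (IoU) between predicted and ground-truth manipulated segments. A minimal sketch of how frame-level fake scores can be turned into segments and compared against a reference segment; the 0.5 threshold, the 20 ms hop, and the function names are illustrative assumptions, not details from the paper:

```python
def segment_iou(pred, gt):
    """Temporal IoU between two (start, end) segments, in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def frames_to_segments(scores, threshold=0.5, hop=0.02):
    """Group consecutive frame scores above threshold into (start, end) segments."""
    segments, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                       # segment opens at this frame
        elif s < threshold and start is not None:
            segments.append((start * hop, i * hop))  # segment closes
            start = None
    if start is not None:                   # segment still open at the end
        segments.append((start * hop, len(scores) * hop))
    return segments
```

For example, two one-second segments overlapping by half a second yield an IoU of 1/3, which gives a feel for how strict the reported 0.3536 average is.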


Key Contributions

  • SSL backbone coupled with graph attention networks (GAT) for robust audio deepfake representation and detection
  • Handcrafted visual features combined with temporal convolution networks (TCN) for interpretable, generalizable video deepfake detection
  • Multimodal fusion pipeline with score calibration achieving 92.78% AUC on AV-Deepfake1M++ classification and 0.3536 IoU for temporal localization
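The last bullet mentions score-level fusion with calibration. A minimal sketch of one common recipe, Platt-style logistic calibration of each modality's score followed by a weighted average; the calibration coefficients and the audio weight are placeholder assumptions, not values from the paper:

```python
import math

def platt_calibrate(score, a, b):
    """Map a raw model score to a calibrated probability via logistic scaling.
    In practice (a, b) are fit on held-out data; values used below are placeholders."""
    return 1.0 / (1.0 + math.exp(-(a * score + b)))

def fuse_scores(audio_score, visual_score, w_audio=0.6):
    """Weighted score-level fusion of per-modality calibrated probabilities."""
    p_audio = platt_calibrate(audio_score, a=4.0, b=-2.0)    # hypothetical audio calibration
    p_visual = platt_calibrate(visual_score, a=3.0, b=-1.5)  # hypothetical visual calibration
    return w_audio * p_audio + (1.0 - w_audio) * p_visual
```

Calibrating each branch before averaging keeps one modality's over-confident raw scores from dominating the fused decision.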

🛡️ Threat Analysis

Output Integrity Attack

Proposes a deepfake detection and temporal localization system targeting AI-generated audio-visual content (talking-head generators + TTS manipulation), which falls directly under output integrity and AI-generated content detection.


Details

Domains
audio, vision, multimodal
Model Types
transformer, gnn
Threat Tags
inference_time, digital
Datasets
AV-Deepfake1M++, LAV-DF
Applications
audio-visual deepfake detection, temporal deepfake localization, talking-head video forensics