
DeepAgent: A Dual-Stream Multi-Agent Fusion for Robust Multimodal Deepfake Detection

Sayeem Been Zaman 1,2, Wasimul Karim 1,3,2, Arefin Ittesafun Abian 1, Reem E. Mohamed 2, Md Rafiqul Islam 2, Asif Karim 2, Sami Azam 2

0 citations · 39 references · arXiv


Published on arXiv: 2512.07351

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Cross-dataset evaluation on DeepFakeTIMIT achieves 97.49% accuracy, with Agent-1 reaching 94.35% on combined Celeb-DF and FakeAVCeleb datasets.

DeepAgent

Novel technique introduced


The increasing use of synthetic media, particularly deepfakes, poses an emerging challenge for digital content verification. Although recent studies use both audio and visual information, most integrate these cues within a single model, which remains vulnerable to modality mismatches, noise, and manipulation. To address this gap, we propose DeepAgent, a multi-agent collaboration framework that incorporates visual and audio modalities simultaneously for effective deepfake detection. DeepAgent consists of two complementary agents. Agent-1 examines each video with a streamlined AlexNet-based CNN to identify visual artifacts of deepfake manipulation, while Agent-2 detects audio-visual inconsistencies by combining acoustic features, audio transcriptions from Whisper, and on-screen text read from frame sequences with EasyOCR. Their decisions are fused by a Random Forest meta-classifier that improves final performance by exploiting the different decision boundaries learned by each agent. The framework is evaluated on three benchmark datasets to demonstrate both component-level and fused performance. Agent-1 achieves a test accuracy of 94.35% on the combined Celeb-DF and FakeAVCeleb datasets. On the FakeAVCeleb dataset, Agent-2 and the final meta-classifier attain accuracies of 93.69% and 81.56%, respectively. In addition, cross-dataset validation on DeepFakeTIMIT confirms the robustness of the meta-classifier, which achieves a final accuracy of 97.49%, indicating strong generalization across diverse datasets. These findings confirm that hierarchy-based fusion enhances robustness by mitigating the weaknesses of individual modalities and demonstrate the effectiveness of a multi-agent approach to diverse types of deepfake manipulation.
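The late-fusion step described above can be illustrated with a minimal sketch: a Random Forest meta-classifier (as in the paper) trained on the probability outputs of the two agents. The agent scores here are synthetic stand-ins, and the feature layout (one column per agent) is an assumption for illustration, not the authors' exact implementation.

```python
# Hedged sketch of DeepAgent-style late fusion: a Random Forest
# meta-classifier over two agents' per-video fake-probability scores.
# The scores below are synthetic placeholders, not real agent outputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n = 200
labels = rng.integers(0, 2, size=n)  # 0 = real, 1 = fake

# Simulated noisy scores: column 0 plays the role of Agent-1 (visual CNN),
# column 1 the role of Agent-2 (audio-visual consistency).
agent1 = np.clip(labels + rng.normal(0.0, 0.35, n), 0.0, 1.0)
agent2 = np.clip(labels + rng.normal(0.0, 0.35, n), 0.0, 1.0)
X = np.column_stack([agent1, agent2])

# Train the meta-classifier on the first 150 videos, evaluate on the rest.
meta = RandomForestClassifier(n_estimators=100, random_state=0)
meta.fit(X[:150], labels[:150])
preds = meta.predict(X[150:])
acc = (preds == labels[150:]).mean()
print(f"meta-classifier held-out accuracy: {acc:.2f}")
```

Because the meta-classifier sees both agents' scores jointly, it can learn decision regions (e.g. "trust Agent-2 when Agent-1 is uncertain") that neither agent's fixed threshold captures on its own.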


Key Contributions

  • Dual-stream multi-agent framework (DeepAgent) with separate specialized agents for visual manipulation artifacts and audio-visual inconsistencies
  • Random Forest meta-classifier that fuses complementary agent decisions to exploit diverse decision boundaries and improve robustness
  • Cross-dataset validation demonstrating 97.49% accuracy on DeepFakeTIMIT without retraining, confirming generalization across diverse deepfake types

🛡️ Threat Analysis

Output Integrity Attack

DeepAgent is a detection architecture for AI-generated content, specifically deepfakes, spanning the audio and visual modalities; it addresses output integrity and authenticity verification of synthetic media.


Details

Domains
vision, audio, multimodal
Model Types
cnn, transformer, traditional_ml, multimodal
Threat Tags
digital, inference_time
Datasets
Celeb-DF, FakeAVCeleb, DeepFakeTIMIT
Applications
deepfake detection, audio-visual content verification, synthetic media forensics