
X-AVDT: Audio-Visual Cross-Attention for Robust Deepfake Detection

Youngseo Kim, Kwan Yun, Seokhyeon Hong, Sihun Cha, Colette Suhjung Koo, Junyong Noh

CVPR · Published on arXiv: 2603.08483

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

X-AVDT achieves a 13.1% accuracy improvement over existing deepfake detectors and generalizes strongly to unseen generators across external benchmarks.

X-AVDT

Novel technique introduced


The surge of highly realistic synthetic videos produced by contemporary generative systems has significantly increased the risk of malicious use, challenging both humans and existing detectors. Against this backdrop, we take a generator-side view and observe that internal cross-attention mechanisms in these models encode fine-grained speech-motion alignment, offering useful correspondence cues for forgery detection. Building on this insight, we propose X-AVDT, a robust and generalizable deepfake detector that probes generator-internal audio-visual signals accessed via DDIM inversion to expose these cues. X-AVDT extracts two complementary signals: (i) a video composite capturing inversion-induced discrepancies, and (ii) an audio-visual cross-attention feature reflecting modality alignment enforced during generation. To enable faithful cross-generator evaluation, we further introduce MMDF, a new multimodal deepfake dataset spanning diverse manipulation types and rapidly evolving synthesis paradigms, including GANs, diffusion, and flow-matching. Extensive experiments demonstrate that X-AVDT achieves leading performance on MMDF and generalizes strongly to external benchmarks and unseen generators, outperforming existing methods with accuracy improved by 13.1%. Our findings highlight the importance of leveraging internal audio-visual consistency cues for robustness to future generators in deepfake detection.


Key Contributions

  • X-AVDT: a deepfake detector that uses DDIM inversion to probe generator-internal audio-visual cross-attention signals, exposing speech-motion alignment artifacts as forgery cues
  • A video composite signal capturing inversion-induced discrepancies combined with cross-attention modality alignment features for complementary detection
  • MMDF: a new multimodal deepfake benchmark spanning GANs, diffusion, and flow-matching generators for cross-generator generalization evaluation
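As a rough illustration of the two-signal design above, the following toy sketch runs a simplified "DDIM inversion" loop through a stand-in audio-conditioned denoiser, then pools (i) the inversion-induced reconstruction discrepancy and (ii) the audio-visual cross-attention weights into a two-dimensional feature. The denoiser, tensor shapes, step counts, and pooling are all hypothetical placeholders chosen for readability, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_attention(q, k, v):
    # Scaled dot-product attention; returns the output and the attention
    # weights, which X-AVDT-style probing treats as an alignment signal.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v, w

# Hypothetical stand-in for a pretrained audio-conditioned denoiser.
W = rng.normal(size=(16, 16)) * 0.1

def step(x, audio):
    # One denoising step; also expose the audio-visual cross-attention map.
    _, attn = cross_attention(x, audio, audio)
    return x @ W, attn

def ddim_invert(video, audio, steps=4):
    # Deterministically map the video toward noise, collecting the
    # cross-attention maps along the trajectory (simplified inversion).
    x, maps = video, []
    for _ in range(steps):
        x, w = step(x, audio)
        maps.append(w)
    return x, np.mean(maps, axis=0)

def xavdt_features(video, audio):
    latent, attn = ddim_invert(video, audio)
    # (i) video composite: discrepancy between the input and a
    # re-generated version of it after inversion.
    recon = latent
    for _ in range(4):
        recon, _ = step(recon, audio)
    composite = np.abs(video - recon).mean()
    # (ii) audio-visual alignment feature pooled from cross-attention.
    alignment = attn.mean()
    return np.array([composite, alignment])

video = rng.normal(size=(8, 16))   # 8 video tokens, 16-dim
audio = rng.normal(size=(8, 16))   # 8 audio tokens, 16-dim
feats = xavdt_features(video, audio)
print(feats.shape)                 # (2,)
```

In the paper the two signals are much richer than two scalars; the point of the sketch is only the control flow: invert, probe the generator-internal cross-attention, and feed both cues to a downstream classifier.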

🛡️ Threat Analysis

Output Integrity Attack

The primary contribution is a novel deepfake detection method (X-AVDT) that authenticates AI-generated video content by exploiting internal cross-attention alignment cues from generative models, directly addressing output integrity and AI-generated content detection.


Details

Domains
vision, audio, multimodal, generative
Model Types
diffusion, GAN, multimodal, transformer
Threat Tags
inference_time
Datasets
MMDF
Applications
video deepfake detection, audio-visual forgery detection