
AuViRe: Audio-visual Speech Representation Reconstruction for Deepfake Temporal Localization

Christos Koutlis 1,2, Symeon Papadopoulos 1,2



Published on arXiv: 2511.18993

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

AuViRe outperforms prior state-of-the-art by +8.9 AP@0.95 on LAV-DF, +9.6 AP@0.5 on AV-Deepfake1M, and +5.1 AUC in an in-the-wild experiment for audio-visual deepfake temporal localization.

AuViRe

Novel technique introduced


With the rapid advancement of sophisticated synthetic audio-visual content, including subtle malicious manipulations, ensuring the integrity of digital media has become paramount. This work presents a novel approach to temporal localization of deepfakes by leveraging Audio-Visual Speech Representation Reconstruction (AuViRe). Specifically, the approach reconstructs speech representations of one modality (e.g., lip movements) from the other (e.g., the audio waveform). Cross-modal reconstruction is significantly harder in manipulated video segments, leading to amplified discrepancies that provide robust discriminative cues for precise temporal forgery localization. AuViRe outperforms the state of the art by +8.9 AP@0.95 on LAV-DF, +9.6 AP@0.5 on AV-Deepfake1M, and +5.1 AUC on an in-the-wild experiment. Code available at https://github.com/mever-team/auvire.
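The core intuition can be illustrated with a minimal sketch: per frame, features predicted from one modality are compared against the actual features of the other, and the reconstruction error spikes where the two streams disagree. The function names, feature vectors, and scoring are hypothetical stand-ins, not the paper's actual networks or speech representations.

```python
# Toy illustration of cross-modal reconstruction discrepancy scoring.
# All names and values here are hypothetical; AuViRe's real pipeline uses
# learned reconstruction networks over speech representations.

def l2(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def discrepancy_scores(actual_feats, predicted_feats):
    """Per-frame reconstruction error: features of one modality (e.g.,
    audio) vs. the same features as predicted from the other modality
    (e.g., lip movements). Higher error suggests manipulation."""
    return [l2(a, p) for a, p in zip(actual_feats, predicted_feats)]

# Frames 2-3 are "manipulated": the audio no longer matches the lips,
# so the lip-based prediction diverges from the actual audio features.
actual    = [[1.0, 0.0], [0.9, 0.1], [0.2, 0.8], [0.1, 0.9], [1.0, 0.0]]
predicted = [[1.0, 0.0], [0.9, 0.1], [0.9, 0.1], [0.8, 0.2], [1.0, 0.0]]
scores = discrepancy_scores(actual, predicted)  # peaks at frames 2 and 3
```

In pristine segments the two modalities are consistent, so the reconstruction error stays low; the amplified error on the forged span is what the localization stage exploits.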


Key Contributions

  • AuViRe architecture: cross-modal speech representation reconstruction where discrepancies between predicted and actual modality representations serve as forgery cues for temporal localization
  • State-of-the-art performance on LAV-DF (+8.9 AP@0.95) and AV-Deepfake1M (+9.6 AP@0.5) deepfake temporal localization benchmarks
  • Real-world in-the-wild evaluation pipeline handling variable video lengths, presence of non-talking subjects, and both audio/visual stream conditions (+5.1 AUC)
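Turning per-frame discrepancy scores into temporal forgery segments can be sketched with simple thresholding and grouping. This is a deliberately simplified stand-in for the paper's localization head; the threshold and grouping rule are assumptions for illustration only.

```python
def localize_segments(scores, threshold):
    """Group consecutive above-threshold frames into (start, end) segments,
    end-exclusive. A minimal stand-in for a learned localization head."""
    segments, start = [], None
    for i, s in enumerate(scores):
        if s > threshold and start is None:
            start = i                      # segment opens
        elif s <= threshold and start is not None:
            segments.append((start, i))    # segment closes
            start = None
    if start is not None:                  # segment runs to the last frame
        segments.append((start, len(scores)))
    return segments

# Two forged spans in a six-frame clip.
segs = localize_segments([0.1, 0.2, 0.9, 0.95, 0.15, 0.8], 0.5)
# segs == [(2, 4), (5, 6)]
```

Predicted segments like these are what AP@0.5 and AP@0.95 evaluate, by measuring overlap with the ground-truth manipulated intervals at the respective IoU thresholds.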

🛡️ Threat Analysis

Output Integrity Attack

Proposes a novel detection architecture that identifies temporally localized deepfake manipulations in audio-visual content by modeling cross-modal reconstruction discrepancies — directly addressing output integrity and AI-generated content detection (deepfake detection).


Details

Domains
multimodal · audio · vision
Model Types
transformer · multimodal
Threat Tags
inference_time · digital
Datasets
LAV-DF · AV-Deepfake1M
Applications
audio-visual deepfake detection · temporal forgery localization · digital media integrity