Defense · 2026

LOGER: Local--Global Ensemble for Robust Deepfake Detection in the Wild

Fei Wu¹, Dagong Lu², Mufeng Yao², Xinlei Xu², Fengjun Guo²

0 citations


Published on arXiv (arXiv:2604.03558)

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Achieved 2nd place in NTIRE 2026 Robust Deepfake Detection Challenge with strong robustness across diverse manipulation methods and real-world degradations

LOGER

Novel technique introduced


Robust deepfake detection in the wild remains challenging due to the ever-growing variety of manipulation techniques and uncontrolled real-world degradations. Forensic cues for deepfake detection reside at two complementary levels: global-level anomalies in semantics and statistics that require holistic image understanding, and local-level forgery traces concentrated in manipulated regions that are easily diluted by global averaging. Since no single backbone or input scale can effectively cover both levels, we propose LOGER, a LOcal--Global Ensemble framework for Robust deepfake detection. The global branch employs heterogeneous vision foundation model backbones at multiple resolutions to capture holistic anomalies with diverse visual priors. The local branch performs patch-level modeling with a Multiple Instance Learning top-$k$ aggregation strategy that selectively pools only the most suspicious regions, mitigating evidence dilution caused by the dominance of normal patches; dual-level supervision at both the aggregated image level and individual patch level keeps local responses discriminative. Because the two branches differ in both granularity and backbone, their errors are largely decorrelated, a property that logit-space fusion exploits for more robust prediction. LOGER achieves 2nd place in the NTIRE 2026 Robust Deepfake Detection Challenge, and further evaluation on multiple public benchmarks confirms its strong robustness and generalization across diverse manipulation methods and real-world degradation conditions.
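The top-$k$ MIL aggregation described in the abstract can be sketched in a few lines. The function name, the example logits, and the default $k$ below are illustrative assumptions, not the paper's implementation:

```python
def topk_mil_aggregate(patch_logits, k=2):
    """Pool patch-level forgery logits into one image-level logit by
    averaging only the k most suspicious (highest-scoring) patches, so
    the many normal patches cannot dilute localized forgery evidence."""
    if not 1 <= k <= len(patch_logits):
        raise ValueError("k must satisfy 1 <= k <= number of patches")
    top_k = sorted(patch_logits, reverse=True)[:k]
    return sum(top_k) / k

# A heavily manipulated region in an otherwise normal image:
patches = [-2.0, -1.5, 3.0, 2.5, -1.0, -0.5]
print(topk_mil_aggregate(patches, k=2))  # 2.75: forgery evidence preserved
print(sum(patches) / len(patches))       # ~0.083: plain averaging dilutes it
```

The contrast with plain mean pooling shows why top-$k$ helps: averaging over all six patches nearly cancels the two strong forgery responses, while top-$k$ keeps them intact.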


Key Contributions

  • Local-global ensemble architecture combining heterogeneous vision foundation models for holistic analysis with patch-level MIL for local forgery traces
  • Top-k aggregation strategy that selectively pools the most suspicious regions to avoid evidence dilution from the dominance of normal patches
  • Dual-level supervision at both image and patch granularity to maintain discriminative local responses
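The dual-level supervision in the last contribution can be sketched as a combined loss over the top-$k$ aggregated image logit and the individual patch logits. The function names, the top-$k$ value, and the weighting `alpha` are illustrative assumptions, not the paper's exact loss:

```python
import math

def bce_with_logit(logit, label):
    # Numerically stable binary cross-entropy on a raw logit
    # (same formulation as PyTorch's BCEWithLogitsLoss).
    return max(logit, 0.0) - logit * label + math.log1p(math.exp(-abs(logit)))

def dual_level_loss(patch_logits, patch_labels, image_label, k=2, alpha=0.5):
    """Supervise both the top-k aggregated image-level logit and every
    individual patch logit, keeping local responses discriminative."""
    top_k = sorted(patch_logits, reverse=True)[:k]
    image_loss = bce_with_logit(sum(top_k) / k, image_label)
    patch_loss = sum(
        bce_with_logit(x, y) for x, y in zip(patch_logits, patch_labels)
    ) / len(patch_logits)
    return alpha * image_loss + (1.0 - alpha) * patch_loss
```

Confident, correctly labeled patch predictions drive both terms toward zero; the patch-level term alone prevents the network from satisfying the image-level objective with diffuse, non-localized responses.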

🛡️ Threat Analysis

Output Integrity Attack

Focuses on detecting AI-generated visual content (deepfakes) to verify content authenticity and integrity, which places it under output integrity and AI-generated-content detection.


Details

Domains
vision, multimodal
Model Types
cnn, transformer
Threat Tags
inference_time, digital
Datasets
NTIRE 2026 Robust Deepfake Detection Challenge
Applications
deepfake detection, media forensics, content authentication