
The Deepfake Detective: Interpreting Neural Forensics Through Sparse Features and Manifolds

Subramanyam Sahoo 1, Jared Junkin 2

0 citations · 12 references · arXiv


Published on arXiv · 2512.21670

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

A small fraction of latent features drive deepfake discrimination, and the feature manifold's intrinsic dimensionality, curvature, and selectivity vary systematically across different forensic artifact types, revealing layer-wise specialization in a VLM-based deepfake detector.

Forensic Manifold Analysis with Sparse Autoencoders

Novel technique introduced


Deepfake detection models have achieved high accuracy in identifying synthetic media, but their decision processes remain largely opaque. In this paper we present a mechanistic interpretability framework for deepfake detection applied to a vision-language model. Our approach combines a sparse autoencoder (SAE) analysis of internal network representations with a novel forensic manifold analysis that probes how the model's features respond to controlled forensic artifact manipulations. We demonstrate that only a small fraction of latent features are actively used in each layer, and that the geometric properties of the model's feature manifold, including intrinsic dimensionality, curvature, and feature selectivity, vary systematically with different types of deepfake artifacts. These insights provide a first step toward opening the "black box" of deepfake detectors, allowing us to identify which learned features correspond to specific forensic artifacts and to guide the development of more interpretable and robust models.
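The abstract's central sparsity claim, that only a small fraction of latent features are active in each layer, can be illustrated with a minimal sketch. The code below uses a randomly initialized ReLU encoder and synthetic activations as stand-ins (the paper's actual SAE is trained on the VLM's internal representations; all names and thresholds here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: activations from one layer of the detector,
# shape (n_samples, d_model). Real values would come from the VLM.
n_samples, d_model, d_latent = 512, 64, 256
activations = rng.normal(size=(n_samples, d_model))

# Toy SAE encoder: random weights with ReLU. A trained SAE would
# learn W_enc (and a decoder) by minimizing reconstruction error
# plus an L1 sparsity penalty on the latents.
W_enc = rng.normal(scale=d_model ** -0.5, size=(d_model, d_latent))
latents = np.maximum(activations @ W_enc, 0.0)

# Activation frequency: for each latent feature, the fraction of
# inputs on which it fires above a small threshold.
freq = (latents > 1e-6).mean(axis=0)

# "Active" features: those firing on at least 1% of inputs.
active_fraction = (freq > 0.01).mean()
print(f"active latent features: {active_fraction:.2%}")
```

With a trained, L1-regularized SAE the active fraction per layer would be far below the near-dense value this random encoder produces; the point of the sketch is only the measurement procedure (per-feature firing frequency, then a threshold on it).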


Key Contributions

  • First application of sparse autoencoders (SAEs) to interpret the internal representations of a deepfake detection model, quantifying active features, activation frequency, and layer-wise sparsity
  • Forensic manifold analysis framework that measures intrinsic dimensionality, curvature, and feature selectivity under four controlled deepfake artifact types (geometric warp, lighting inconsistency, boundary blur, color mismatch)
  • Demonstration that only a small fraction of latent features are actively used per layer and that manifold geometry varies systematically with artifact type in a 2B-parameter VLM
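One of the manifold metrics listed above, intrinsic dimensionality, can be sketched with a simple PCA-based estimator: count how many principal components are needed to explain most of the variance of the feature cloud for a given artifact condition. The estimator, the 95% threshold, and the two synthetic "artifact" clouds below are illustrative assumptions; the paper does not specify this particular estimator:

```python
import numpy as np

def intrinsic_dim(X, var_threshold=0.95):
    """PCA-based intrinsic dimensionality: the number of principal
    components needed to explain `var_threshold` of the variance."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)   # singular values
    ratio = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(ratio, var_threshold) + 1)

rng = np.random.default_rng(1)
n = 300
# Toy feature clouds for two hypothetical artifact conditions,
# both embedded in a 32-D feature space: one generated from a
# 4-D source (standing in for, say, boundary blur) and one from
# a 16-D source (standing in for geometric warp).
low_d = rng.normal(size=(n, 4)) @ rng.normal(size=(4, 32))
high_d = rng.normal(size=(n, 16)) @ rng.normal(size=(16, 32))

print("low-D condition ID: ", intrinsic_dim(low_d))
print("high-D condition ID:", intrinsic_dim(high_d))
```

Comparing such estimates across the four artifact types (geometric warp, lighting inconsistency, boundary blur, color mismatch) is the kind of systematic geometric comparison the contribution describes; curvature and selectivity would each need their own estimators.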

🛡️ Threat Analysis

Output Integrity Attack

Deepfake detection is a canonical ML09 task (detecting AI-generated/synthetic content). This paper proposes an interpretability framework specifically to understand what forensic features a deepfake detector has learned — directly serving the goal of building more robust and reliable synthetic media detectors.


Details

Domains
vision, multimodal
Model Types
vlm, transformer
Threat Tags
inference_time
Datasets
FaceForensics++
Applications
deepfake detection, synthetic media forensics