defense 2026

SNAP: Speaker Nulling for Artifact Projection in Speech Deepfake Detection

Kyudan Jung 1,2, Jihwan Kim 1,2, Minwoo Lee 1, Soyoon Kim 2, Jeonghoon Kim 2, Jaegul Choo 1, Cheonbok Park 1,2

Published on arXiv

2603.20686

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Achieves state-of-the-art detection performance with robust generalization to unseen speakers and TTS architectures, using only a 2,049-parameter logistic regression on speaker-nulled features

SNAP

Novel technique introduced


Recent advancements in text-to-speech technologies enable generating high-fidelity synthetic speech nearly indistinguishable from real human voices. While recent studies show the efficacy of self-supervised learning-based speech encoders for deepfake detection, these models struggle to generalize across unseen speakers. Our quantitative analysis suggests these encoder representations are substantially influenced by speaker information, causing detectors to exploit speaker-specific correlations rather than artifact-related cues. We call this phenomenon speaker entanglement. To mitigate this reliance, we introduce SNAP, a speaker-nulling framework. We estimate a speaker subspace and apply orthogonal projection to suppress speaker-dependent components, isolating synthesis artifacts within the residual features. By reducing speaker entanglement, SNAP encourages detectors to focus on artifact-related patterns, leading to state-of-the-art performance.


Key Contributions

  • Identifies and quantifies the 'speaker entanglement' phenomenon, in which SSL representations are dominated by speaker identity rather than synthesis artifacts
  • Proposes SNAP framework using orthogonal subspace projection to nullify speaker information and isolate synthesis artifacts
  • Achieves state-of-the-art deepfake detection with only a linear classifier (2,049 parameters) by operating on speaker-nulled residual features
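The speaker-nulling idea above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the speaker subspace is estimated by SVD of centered speaker embeddings (rank `k` is a hypothetical hyperparameter), and that the SSL feature dimension is 2048, which would make a logistic regression on the residual features have 2,048 weights plus one bias, i.e. the 2,049 parameters cited above.

```python
import numpy as np

def speaker_nulling_projection(speaker_embeddings, features, k=16):
    """Project features onto the orthogonal complement of an
    estimated speaker subspace (SNAP-style nulling, sketched)."""
    # Estimate a rank-k speaker subspace via SVD of centered embeddings.
    X = speaker_embeddings - speaker_embeddings.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:k].T                      # (d, k) orthonormal basis of speaker subspace
    # Orthogonal projector onto the complement: P = I - V V^T
    P = np.eye(V.shape[0]) - V @ V.T
    # Residual features carry the components outside the speaker subspace,
    # where synthesis artifacts are assumed to live.
    return features @ P.T

# Toy usage with random data (dimensions are assumptions).
rng = np.random.default_rng(0)
d = 2048                              # SSL encoder feature dimension
spk = rng.normal(size=(100, d))       # stand-in speaker embeddings
feats = rng.normal(size=(5, d))       # stand-in utterance features
nulled = speaker_nulling_projection(spk, feats, k=16)
```

A detector (e.g. logistic regression) is then trained on `nulled` instead of the raw encoder features, so it cannot exploit correlations lying in the suppressed speaker directions.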

🛡️ Threat Analysis

Output Integrity Attack

The paper addresses detection of AI-generated speech content (deepfakes) to verify audio authenticity — this is output integrity and content provenance. The goal is to distinguish synthetic speech from real recordings, which falls under verifying/authenticating model outputs and detecting AI-generated content.


Details

Domains
audio
Model Types
transformer
Threat Tags
inference_time
Datasets
ASVspoof
Applications
speech deepfake detection, voice phishing prevention, audio content authentication