defense 2025

Addressing Gradient Misalignment in Data-Augmented Training for Robust Speech Deepfake Detection

Duc-Tuan Truong 1, Tianchi Liu 2, Junjie Li 3, Ruijie Tao 2, Kong Aik Lee 3, Eng Siong Chng 1

0 citations · 33 references · arXiv


Published on arXiv · 2509.20682

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Gradient alignment in DPDA training achieves up to 18.69% relative EER reduction on the In-the-Wild dataset while accelerating convergence compared to standard data-augmented training baselines.

DPDA (Dual-Path Data-Augmented training with Gradient Alignment)

Novel technique introduced


In speech deepfake detection (SDD), data augmentation (DA) is commonly used to improve model generalization across varied speech conditions and spoofing attacks. However, during training, the backpropagated gradients from original and augmented inputs may misalign, which can result in conflicting parameter updates. These conflicts could hinder convergence and push the model toward suboptimal solutions, thereby reducing the benefits of DA. To investigate and address this issue, we design a dual-path data-augmented (DPDA) training framework with gradient alignment for SDD. In our framework, each training utterance is processed through two input paths: one using the original speech and the other with its augmented version. This design allows us to compare and align their backpropagated gradient directions to reduce optimization conflicts. Our analysis shows that approximately 25% of training iterations exhibit gradient conflicts between the original inputs and their augmented counterparts when using RawBoost augmentation. By resolving these conflicts with gradient alignment, our method accelerates convergence by reducing the number of training epochs and achieves up to an 18.69% relative reduction in Equal Error Rate on the In-the-Wild dataset compared to the baseline.
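The alignment step described above can be sketched in a few lines. The snippet below is a minimal NumPy illustration of a PCGrad-style projection: when the original-path and augmented-path gradients conflict (negative dot product), the augmented-path gradient is projected onto the plane orthogonal to the original-path gradient before the two are combined. The function name and the exact combination rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def align_gradients(g_orig, g_aug):
    """Combine the gradients from the two input paths.

    If they conflict (negative dot product, i.e. angle > 90 degrees),
    remove the component of the augmented-path gradient that opposes
    the original-path gradient (PCGrad-style projection).
    Illustrative sketch only.
    """
    dot = np.dot(g_orig, g_aug)
    if dot < 0:  # conflicting update directions
        g_aug = g_aug - (dot / np.dot(g_orig, g_orig)) * g_orig
    return g_orig + g_aug  # combined update direction

# Conflicting pair: [-1, 1] opposes [1, 0], so its conflicting
# component is removed before combining.
g_o = np.array([1.0, 0.0])
g_a = np.array([-1.0, 1.0])
print(align_gradients(g_o, g_a))  # → [1. 1.]
```

In a real training loop the two gradient vectors would come from backpropagating the loss on the original and augmented versions of the same utterance through shared model parameters.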


Key Contributions

  • Identifies and quantifies gradient conflicts between original and augmented inputs during data-augmented SDD training (~25% of iterations with RawBoost augmentation)
  • Proposes DPDA: a dual-path training framework that compares and aligns gradient directions from original vs. augmented speech to reduce optimization conflicts
  • Demonstrates architecture-agnostic improvements across multiple SDD backbones, augmentation strategies, and benchmark datasets, with up to 18.69% relative EER reduction on In-the-Wild
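The ~25% conflict rate reported in the first contribution can be measured by checking the sign of the dot product between the two paths' gradients at each iteration. The helper below is a hypothetical illustration of that measurement, not the authors' code:

```python
import numpy as np

def conflict_rate(grad_pairs):
    """Fraction of iterations where the original- and augmented-path
    gradients point in conflicting directions (negative dot product).
    Illustrative measurement sketch only."""
    conflicts = sum(1 for g_o, g_a in grad_pairs if np.dot(g_o, g_a) < 0)
    return conflicts / len(grad_pairs)

# Toy stream of per-iteration gradient pairs (flattened vectors)
pairs = [
    (np.array([1.0, 0.0]), np.array([0.5, 0.5])),    # aligned
    (np.array([1.0, 0.0]), np.array([-0.2, 1.0])),   # conflict
    (np.array([0.0, 1.0]), np.array([0.1, 0.9])),    # aligned
    (np.array([1.0, 1.0]), np.array([-1.0, -1.0])),  # conflict
]
print(conflict_rate(pairs))  # → 0.5
```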

🛡️ Threat Analysis

Output Integrity Attack

The paper's primary contribution is improving the detection of AI-generated/synthetic speech (speech deepfakes), which falls squarely under output integrity and AI-generated content detection. The novel DPDA training framework is specifically designed and evaluated for SDD, making this an architectural/methodological advance in deepfake detection rather than a mere domain application of existing methods.


Details

Domains
audio
Model Types
transformer, cnn
Threat Tags
inference_time
Datasets
ASVspoof, In-the-Wild
Applications
speech deepfake detection, anti-spoofing