Towards Explicit Acoustic Evidence Perception in Audio LLMs for Speech Deepfake Detection
Xiaoxuan Guo 1,2, Yuankun Xie 1,2, Haonan Cheng 1, Jiayi Zhou 2, Jian Liu 2, Hengyan Huang 1, Long Ye 1, Qin Zhang 1
Published on arXiv: 2601.23066
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
SDD-APALLM achieves consistent gains in detection accuracy and robustness over audio-only LLM baselines, particularly for fake speech with natural semantics that bypasses semantic-dominant detectors.
SDD-APALLM
Novel technique introduced
Speech deepfake detection (SDD) determines whether a given speech signal is genuine or synthetically generated. Existing audio large language model (LLM)-based methods excel at content understanding, but their predictions are often biased toward semantically correlated cues, so fine-grained acoustic artifacts are overlooked during decision-making. Consequently, fake speech with natural semantics can bypass detectors despite harboring subtle acoustic anomalies; this suggests that the challenge stems not from an absence of acoustic information but from its inadequate accessibility when semantic-dominant reasoning prevails. To address this issue, we investigate SDD within the audio LLM paradigm and introduce SDD with Auditory Perception-enhanced Audio Large Language Model (SDD-APALLM), an acoustically enhanced framework designed to explicitly expose fine-grained time-frequency evidence as accessible acoustic cues. By combining raw audio with structured spectrograms, the proposed framework enables audio LLMs to capture subtle acoustic inconsistencies more effectively without compromising their semantic understanding. Experimental results show consistent gains in detection accuracy and robustness, especially in cases where semantic cues are misleading. Further analysis reveals that these improvements stem from coordinated use of semantic and acoustic information rather than simple modality aggregation.
Key Contributions
- Identifies that audio LLM-based SDD suffers from semantic shortcut bias, causing fine-grained acoustic artifacts in synthetically natural speech to be overlooked
- Introduces SDD-APALLM, which combines raw audio with CQT-based spectrogram visual tokens to explicitly surface time-frequency acoustic evidence during LLM reasoning
- Demonstrates consistent in-domain and cross-domain detection gains, especially on semantically natural fake speech where audio-only LLMs fail
🛡️ Threat Analysis
Proposes a novel architecture for detecting AI-generated/synthetic speech (speech deepfake detection), which is a core ML09 concern — AI-generated content detection. The contribution is a new forensic detection framework (SDD-APALLM), not merely an application of existing methods to a new domain.