Analyzing Reasoning Shifts in Audio Deepfake Detection under Adversarial Attacks: The Reasoning Tax versus Shield Bifurcation
Binh Nguyen, Thai Le
Published on arXiv
2601.03615
Input Manipulation Attack
OWASP ML Top 10 — ML01
Output Integrity Attack
OWASP ML Top 10 — ML09
Key Finding
Cognitive dissonance signals adversarial manipulation in 78.2% of successful attacks. Linguistic attacks, however, produce dangerous rationalization traps in which models confidently justify incorrect verdicts with highly coherent reasoning
Cognitive Dissonance Forensic Auditing Framework
Novel technique introduced
Audio Language Models (ALMs) offer a promising shift toward explainable audio deepfake detection (ADD), moving beyond *black-box* classifiers by providing some transparency into their predictions via reasoning traces. This necessitates a new class of robustness analysis: robustness of the predictive reasoning under adversarial attacks, which goes beyond the existing paradigm focused mainly on shifts in final predictions (e.g., fake vs. real). To analyze such reasoning shifts, we introduce a forensic auditing framework that evaluates the robustness of ALMs' reasoning under adversarial attacks along three interconnected dimensions: acoustic perception, cognitive coherence, and cognitive dissonance. Our systematic analysis reveals that explicit reasoning does not universally enhance robustness. Instead, we observe a bifurcation: for models with robust acoustic perception, reasoning acts as a defensive *"shield"*, protecting them from adversarial attacks; for others, it imposes a performance *"tax"*, particularly under linguistic attacks that reduce cognitive coherence and increase attack success rate. Crucially, even when classification fails, high cognitive dissonance can serve as a *silent alarm*, flagging potential manipulation. Overall, this work provides a critical evaluation of the role of reasoning in forensic audio deepfake analysis and its vulnerabilities.
Key Contributions
- Three-tier forensic auditing framework measuring acoustic perception, cognitive coherence, and cognitive dissonance of ALMs under adversarial attacks
- Discovery of a reasoning tax vs. shield bifurcation: Chain-of-Thought reasoning defends acoustically-grounded models but degrades performance in hallucination-prone models via rationalization traps
- Cognitive dissonance metric as a silent alarm that correctly flags adversarial manipulation in up to 78.2% of successful attacks even when the final classification is compromised
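The paper's exact dissonance metric is not reproduced here, but the "silent alarm" idea can be sketched with a deliberately naive heuristic: score how often a model's own reasoning trace contradicts its final verdict, and flag manipulation when that disagreement is high. The cue lists, function names, and threshold below are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of a dissonance-based "silent alarm" for ALM outputs.
# Cue vocabularies and the 0.5 threshold are toy assumptions for illustration;
# a real audit would use semantic scoring of the trace, not substring matching.

FAKE_CUES = {"synthetic", "robotic", "artifact", "unnatural", "vocoder"}
REAL_CUES = {"breathing", "consistent", "human", "authentic", "smooth"}


def dissonance_score(verdict: str, reasoning_sentences: list[str]) -> float:
    """Fraction of reasoning sentences whose acoustic cues contradict the verdict."""
    # If the model says "fake", sentences describing real-speech cues contradict it,
    # and vice versa.
    contra_cues = REAL_CUES if verdict == "fake" else FAKE_CUES
    contradicting = sum(
        any(cue in s.lower() for cue in contra_cues) for s in reasoning_sentences
    )
    return contradicting / max(len(reasoning_sentences), 1)


def silent_alarm(verdict: str, reasoning_sentences: list[str],
                 threshold: float = 0.5) -> bool:
    """Flag potential adversarial manipulation when the trace disagrees with the verdict."""
    return dissonance_score(verdict, reasoning_sentences) >= threshold


# Example: the model's verdict is "real", yet its own trace keeps
# describing synthetic artifacts -- exactly the mismatch the paper
# reports as a signal of adversarial manipulation.
trace = [
    "The prosody sounds robotic and the pauses are unnatural.",
    "There are vocoder-like artifacts in the high frequencies.",
    "Overall the speaker identity seems plausible.",
]
print(silent_alarm("real", trace))  # high dissonance -> True
```

The key design point mirrored from the paper is that the alarm reads only the model's own outputs: even when the final classification is compromised, internal disagreement in the reasoning trace remains observable.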
🛡️ Threat Analysis
The core analysis covers inference-time adversarial attacks (acoustic perturbations and linguistic attacks) on Audio Language Models, which degrade classification performance and corrupt reasoning traces.
The target systems are audio deepfake detectors (AI-generated content detection), and the paper directly investigates when and how these output-integrity systems fail under adversarial pressure.