Defense · 2025

T3-Tracer: A Tri-level Temporal-Aware Framework for Audio Forgery Detection and Localization

Shuhan Xia, Xuannan Liu, Xing Cui, Peipei Li

0 citations · 33 references · arXiv


Published on arXiv: 2511.21237

Output Integrity Attack

OWASP ML Top 10 — ML09

Key Finding

Achieves state-of-the-art performance on three challenging partial audio forgery detection datasets.

T3-Tracer

Novel technique introduced


Recently, partial audio forgery has emerged as a new form of audio manipulation. Attackers selectively modify partial but semantically critical frames while preserving overall perceptual authenticity, making such forgeries particularly difficult to detect. Existing methods focus on independently detecting whether a single frame is forged and lack the hierarchical structure needed to capture both transient and sustained anomalies across different temporal levels. To address these limitations, we identify three key levels relevant to partial audio forgery detection and present T3-Tracer, the first framework that jointly analyzes audio at the frame, segment, and audio levels to comprehensively detect forgery traces. T3-Tracer consists of two complementary core modules: the Frame-Audio Feature Aggregation Module (FA-FAM) and the Segment-level Multi-Scale Discrepancy-Aware Module (SMDAM). FA-FAM determines the authenticity of each audio frame, combining frame-level and audio-level temporal information to detect intra-frame forgery cues and global semantic inconsistencies. To further refine and correct frame-level detections, we introduce SMDAM, which detects forgery boundaries at the segment level. It adopts a dual-branch architecture that jointly models frame features and inter-frame differences across multi-scale temporal windows, effectively identifying the abrupt anomalies that appear at forgery boundaries. Extensive experiments on three challenging datasets demonstrate that our approach achieves state-of-the-art performance.
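The frame-audio aggregation idea can be sketched in a few lines. The snippet below is an illustrative simplification, not the paper's actual FA-FAM module: each frame's local features are concatenated with a global audio-level embedding (mean pooling stands in for a learned encoder), so a downstream per-frame classifier sees both local cues and whole-recording context. All function and variable names here are assumptions.

```python
import numpy as np

def aggregate_frame_audio(frame_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-frame features with a global audio-level embedding.

    frame_feats: (T, D) array of per-frame features.
    Returns a (T, 2*D) array in which every row carries both local (frame)
    and global (audio) context, letting a per-frame classifier flag frames
    that look locally plausible but are semantically inconsistent with the
    recording as a whole. Mean pooling is a stand-in for a learned encoder.
    """
    global_emb = frame_feats.mean(axis=0, keepdims=True)        # (1, D) audio-level summary
    global_tiled = np.broadcast_to(global_emb, frame_feats.shape)
    return np.concatenate([frame_feats, global_tiled], axis=1)  # (T, 2*D)
```

In a real system the pooled summary would come from a trained audio encoder, but the shape logic is the same: every frame row gains the identical global context vector.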


Key Contributions

  • T3-Tracer: the first framework to jointly analyze audio at frame, segment, and audio levels for partial forgery detection and localization
  • FA-FAM (Frame-Audio Feature Aggregation Module): combines frame-level and global audio-level temporal context to detect intra-frame forgery cues and semantic inconsistencies
  • SMDAM (Segment-level Multi-Scale Discrepancy-Aware Module): dual-branch architecture modeling frame features and inter-frame differences across multi-scale windows to localize forgery boundaries
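The segment-level discrepancy idea behind SMDAM can be illustrated with a minimal sketch (assumed names and shapes, not the paper's implementation): splices tend to produce abrupt jumps in frame features, and measuring inter-frame differences at several temporal offsets makes those jumps stand out against smooth genuine speech.

```python
import numpy as np

def multiscale_frame_differences(frame_feats: np.ndarray,
                                 scales=(1, 2, 4)) -> np.ndarray:
    """Per-frame discrepancy scores from inter-frame differences at several scales.

    frame_feats: (T, D) array of per-frame features.
    For each temporal offset s, computes the feature difference between
    frame t and frame t-s, then reduces it to a magnitude; abrupt changes
    at forgery boundaries yield peaks in these scores, while genuine audio
    varies smoothly. Returns a (len(scales), T) array of scores.
    """
    T, _ = frame_feats.shape
    scores = []
    for s in scales:
        diff = np.zeros_like(frame_feats)
        diff[s:] = frame_feats[s:] - frame_feats[:-s]   # difference at offset s
        scores.append(np.linalg.norm(diff, axis=1))      # per-frame magnitude
    return np.stack(scores, axis=0)
```

For example, if frame features jump at a splice point, the offset-1 score peaks exactly at that frame, and the larger offsets widen the detection window around the boundary, which is the intuition behind using multiple temporal scales.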

🛡️ Threat Analysis

Output Integrity Attack

Proposes a novel forensic detection architecture for AI-manipulated audio content — specifically partial audio forgeries — which falls squarely under output integrity and AI-generated content detection. The paper introduces new architectural modules (FA-FAM, SMDAM) rather than merely applying existing methods to a new domain.


Details

Domains
audio
Model Types
transformer
Threat Tags
inference_time · digital
Applications
audio forgery detection · partial audio manipulation localization · audio forensics