
IO-RAE: Information-Obfuscation Reversible Adversarial Example for Audio Privacy Protection

Jiajie Zhu 1, Xia Du 1,2, Xiaoyuan Liu 1, Jizhe Zhou 2, Qizhen Xu 1, Zheng Lin 3, Chi-Man Pun 4

0 citations · 56 references · arXiv


Published on arXiv

2601.01239

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

Achieves a 96.5% targeted and a 100% untargeted ASR misguidance rate, including against Google's commercial black-box ASR, while recovering the original audio at a PESQ score of 4.45 with a 0% transcription error rate.

IO-RAE (Cumulative Signal Attack)

Novel technique introduced


The rapid advancement of artificial intelligence has significantly accelerated the adoption of speech recognition technology, leading to its widespread integration across applications. However, this surge in usage also highlights a critical issue: audio data is highly vulnerable to unauthorized exposure and analysis, posing significant privacy risks for businesses and individuals. This paper introduces the Information-Obfuscation Reversible Adversarial Example (IO-RAE) framework, the first method designed to safeguard audio privacy using reversible adversarial examples. IO-RAE leverages large language models to generate misleading yet contextually coherent content, effectively preventing unauthorized eavesdropping by humans and Automatic Speech Recognition (ASR) systems. Additionally, we propose the Cumulative Signal Attack technique, which mitigates high-frequency noise and enhances attack efficacy by targeting low-frequency signals. Our approach protects audio data without degrading its quality or usability. Experimental evaluations demonstrate the superiority of our method, achieving a targeted misguidance rate of 96.5% and a 100% untargeted misguidance rate in obfuscating target keywords across multiple ASR models, including a commercial black-box system from Google. Furthermore, the quality of the recovered audio, measured by the Perceptual Evaluation of Speech Quality score, reached 4.45, comparable to high-quality original recordings. Notably, the recovered audio processed by ASR systems exhibited an error rate of 0%, indicating nearly lossless recovery. These results highlight the practical applicability and effectiveness of the IO-RAE framework in protecting sensitive audio privacy.
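The abstract does not reproduce IO-RAE's reversible-embedding machinery, but the reason recovery can be essentially lossless is easy to illustrate: if the exact perturbation is sealed under a shared key, an authorized party can regenerate the key stream, unseal the perturbation, and subtract it. The key-stream masking and the `protect`/`recover` helpers below are a hypothetical stand-in for the paper's scheme, not the authors' implementation:

```python
import numpy as np

def protect(audio, perturbation, key):
    """Return obfuscated audio plus a key-sealed copy of the perturbation.
    Recovery is exact because the perturbation is stored, not discarded.
    (Hypothetical stand-in for IO-RAE's reversible embedding.)"""
    mask = np.random.default_rng(key).standard_normal(perturbation.size)
    return audio + perturbation, perturbation + mask

def recover(adversarial_audio, sealed, key):
    """Authorized recovery: regenerate the key stream, unseal, subtract."""
    mask = np.random.default_rng(key).standard_normal(sealed.size)
    return adversarial_audio - (sealed - mask)
```

Because the perturbation is recovered exactly (up to floating-point rounding), the restored audio matches the original, which is consistent with the reported 0% transcription error rate on recovered audio.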


Key Contributions

  • IO-RAE framework: reversible adversarial audio examples that obfuscate content for both humans and ASR systems while allowing authorized recovery
  • Cumulative Signal Attack technique that targets low-frequency components to reduce audible noise artifacts and enhance attack efficacy
  • Integration of LLMs to generate contextually coherent misleading transcription targets, achieving 96.5% targeted and 100% untargeted misguidance across multiple ASR systems

🛡️ Threat Analysis

Input Manipulation Attack

IO-RAE crafts adversarial perturbations on audio inputs that cause ASR models to produce incorrect transcriptions at inference time — this is a targeted evasion/misclassification attack. The Cumulative Signal Attack is a novel low-frequency perturbation technique specifically designed to maximize ASR misguidance rates, directly fitting the adversarial example paradigm.
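The summary does not spell out the Cumulative Signal Attack's optimization, but its core idea of confining adversarial energy to low frequencies to avoid audible high-frequency artifacts can be sketched as an iterative gradient-sign attack whose steps are low-pass filtered before being accumulated. The `grad_fn` surrogate, the 1 kHz cutoff, and the step sizes below are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def lowpass(signal, sample_rate, cutoff_hz):
    """Zero out frequency components above cutoff_hz via FFT masking."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

def cumulative_lowfreq_attack(audio, grad_fn, sample_rate=16000,
                              cutoff_hz=1000.0, step=0.001, iters=10):
    """Accumulate low-pass-filtered gradient-sign steps on the audio.
    grad_fn is an assumed surrogate returning the loss gradient w.r.t. the
    current input; only its low-frequency direction is added each iteration."""
    perturbation = np.zeros_like(audio)
    for _ in range(iters):
        grad = grad_fn(audio + perturbation)
        step_dir = lowpass(np.sign(grad), sample_rate, cutoff_hz)
        perturbation += step * step_dir
    return audio + perturbation, perturbation
```

By construction the accumulated perturbation carries essentially no energy above the cutoff, which is the property the paper credits for reduced audible noise.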


Details

  • Domains: audio
  • Model Types: transformer
  • Threat Tags: black_box · inference_time · targeted · untargeted · digital
  • Applications: automatic speech recognition · audio privacy protection