benchmark 2026

Aegis: Towards Governance, Integrity, and Security of AI Voice Agents

Xiang Li 1, Pin-Yu Chen 2, Wenqi Wei 1

0 citations · 16 references · arXiv (Cornell University)

α

Published on arXiv

2602.07379

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Voice agents remain vulnerable to behavioral attacks that bypass access controls even under strict restrictions, with open-weight models exhibiting systematically higher susceptibility than proprietary models.

Aegis

Novel technique introduced


With the rapid advancement and adoption of Audio Large Language Models (ALLMs), voice agents are now being deployed in high-stakes domains such as banking, customer service, and IT support. However, their vulnerabilities to adversarial misuse still remain unexplored. While prior work has examined aspects of trustworthiness in ALLMs, such as harmful content generation and hallucination, systematic security evaluations of voice agents are still lacking. To address this gap, we propose Aegis, a red-teaming framework for the governance, integrity, and security of voice agents. Aegis models the realistic deployment pipeline of voice agents and designs structured adversarial scenarios of critical risks, including privacy leakage, privilege escalation, resource abuse, etc. We evaluate the framework through case studies in banking call centers, IT Support, and logistics. Our evaluation shows that while access controls mitigate data-level risks, voice agents remain vulnerable to behavioral attacks that cannot be addressed through access restrictions alone, even under strict access controls. We observe systematic differences across model families, with open-weight models exhibiting higher susceptibility, underscoring the need for layered defenses that combine access control, policy enforcement, and behavioral monitoring to secure next-generation voice agents.


Key Contributions

  • Aegis: the first systematic red-teaming framework for ALLM-powered voice agents, modeling realistic deployment pipelines with external APIs and multi-turn workflows
  • A structured adversarial taxonomy covering privacy leakage, privilege escalation, resource abuse, and authentication bypass, grounded in MITRE ATT&CK tactics
  • Case studies in banking, IT support, and logistics revealing that behavioral attacks persist even under strict access controls, with open-weight models showing higher susceptibility

🛡️ Threat Analysis


Details

Domains
audiomultimodalnlp
Model Types
llm
Threat Tags
black_boxinference_time
Applications
voice agentsbanking call centersit supportlogistics dispatch