
3D Guard-Layer: An Integrated Agentic AI Safety System for Edge Artificial Intelligence

Eren Kurshan 1, Yuan Xie 2, Paul Franzon 3


Published on arXiv (arXiv:2511.08842)

Input Manipulation Attack (OWASP ML Top 10 — ML01)

Excessive Agency (OWASP LLM Top 10 — LLM08)

Key Finding

The co-located 3D-integrated safety layer improves resilience against network-based attacks on edge AI systems with minimal hardware overhead.

3D Guard-Layer (novel technique introduced)


AI systems have found a wide range of real-world applications in recent years, and the adoption of edge artificial intelligence, which embeds AI directly into edge devices, is growing rapidly. Despite the implementation of guardrails and safety mechanisms, security vulnerabilities and challenges have become increasingly prevalent in this domain, posing a significant barrier to the practical deployment and safety of AI systems. This paper proposes an agentic AI safety architecture that leverages 3D integration to embed a dedicated safety layer. It introduces an adaptive AI safety infrastructure capable of dynamically learning and mitigating attacks against the AI system. The system leverages the inherent advantages of co-location with the edge computing hardware to continuously monitor, detect, and proactively mitigate threats to the AI system. The integration of local processing and learning capabilities enhances resilience against emerging network-based attacks while simultaneously improving system reliability, modularity, and performance, all with minimal cost and 3D integration overhead.


Key Contributions

  • 3D chip-stacked hardware integration of a co-located safety monitoring layer with edge AI accelerators
  • Adaptive, agent-based threat detection and mitigation system that continuously learns emerging attack patterns
  • Safety architecture designed for low-overhead deployment on resource-constrained edge devices
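The adaptive, continuously learning monitor described in the contributions above can be illustrated with a small online anomaly detector. This is a minimal sketch under stated assumptions: the class name, window size, warm-up length, and z-score threshold are all illustrative choices of ours, not the paper's actual design.

```python
import math
from collections import deque


class SafetyMonitor:
    """Illustrative sketch of a co-located safety layer that watches a
    scalar statistic of each inference request and adapts its notion of
    'normal' online. Thresholds are assumed, not from the paper."""

    def __init__(self, window=256, z_threshold=4.0, warmup=32):
        self.window = deque(maxlen=window)  # recent benign observations
        self.z_threshold = z_threshold      # anomaly cutoff (assumed)
        self.warmup = warmup                # baseline-learning phase

    def _stats(self):
        n = len(self.window)
        mean = sum(self.window) / n
        var = sum((x - mean) ** 2 for x in self.window) / n
        return mean, math.sqrt(var)

    def check(self, feature: float) -> bool:
        """Return True if the observation looks benign; False would
        trigger mitigation (e.g. dropping the request). Benign traffic
        updates the baseline, so the detector adapts over time."""
        if len(self.window) < self.warmup:
            self.window.append(feature)     # warm-up: learn baseline first
            return True
        mean, std = self._stats()
        z = abs(feature - mean) / (std + 1e-9)
        if z > self.z_threshold:
            return False                    # anomalous: do not learn from it
        self.window.append(feature)         # benign: refresh the baseline
        return True
```

A real co-located layer would track many such statistics in hardware next to the accelerator; the point here is only the learn-then-gate loop, where flagged inputs are excluded from the baseline so an attacker cannot slowly poison it.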

🛡️ Threat Analysis

Input Manipulation Attack

The safety architecture is specifically designed to monitor, detect, and mitigate attacks against deployed edge AI systems at inference time, including network-based attacks. The primary threat vector described aligns with inference-time input manipulation and evasion attacks on edge AI models.
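A simple form of inference-time input screening can be sketched with two checks on an image-like input: a value-range check and a total-variation (high-frequency energy) check, since many evasion perturbations add unusually noisy structure. Both checks and the limit value are illustrative assumptions, not the paper's mechanism.

```python
def total_variation(img):
    """Sum of absolute differences between neighboring pixels of a 2D
    list of floats; a rough proxy for high-frequency content."""
    h, w = len(img), len(img[0])
    tv = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                tv += abs(img[y][x + 1] - img[y][x])
            if y + 1 < h:
                tv += abs(img[y + 1][x] - img[y][x])
    return tv


def looks_manipulated(img, tv_limit=100.0):
    """Flag inputs that are out of the expected [0, 1] range or whose
    high-frequency energy exceeds a limit that would, in practice, be
    calibrated on benign traffic (the value here is an assumption)."""
    in_range = all(0.0 <= p <= 1.0 for row in img for p in row)
    return (not in_range) or total_variation(img) > tv_limit
```

Checks like these are cheap enough to run on every request, which is why co-locating them with the accelerator (rather than routing inputs through a remote filter) keeps the screening on the inference path with negligible latency.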


Details

Domains: vision, nlp
Model Types: traditional_ml, transformer
Threat Tags: inference_time, digital
Applications: edge AI inference, embedded AI systems