
Securing Agentic AI: Threat Modeling and Risk Analysis for Network Monitoring Agentic AI System

Pallavi Zambare, Venkata Nikhil Thanikella, Ying Liu


Published on arXiv (2508.10043)

Excessive Agency

OWASP LLM Top 10 — LLM08

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Two practical attacks on a LangChain-based agentic network monitor — resource DoS via traffic replay and memory poisoning via log tampering — produce measurable telemetry delays and computational overhead, validating MAESTRO as an operational threat-mapping methodology.

MAESTRO

Novel technique introduced


Combining Large Language Models (LLMs) with autonomous agents in network monitoring and decision-making systems creates serious security issues. In this research, the MAESTRO framework, a seven-layer threat modeling architecture, was used to expose, evaluate, and mitigate vulnerabilities in agentic AI systems. A prototype agent system was built in Python using LangChain and WebSocket telemetry, comprising inference, memory, parameter-tuning, and anomaly-detection modules. Two practical threat cases were confirmed: (i) resource denial of service via traffic replay, and (ii) memory poisoning via tampering with the historical log file maintained by the agent. Both produced measurable performance degradation: telemetry updates were delayed and computational load increased as the system adapted poorly. A multilayered defense-in-depth approach is proposed, combining memory isolation, planner validation, and real-time anomaly response. These findings verify that MAESTRO is viable for operational threat mapping and prospective risk scoring, and can serve as the basis for resilient system design. The authors emphasize enforcing memory integrity, monitoring adaptation logic, and protecting cross-layer communication to guarantee agentic AI reliability in adversarial settings.
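The paper does not include code, but the memory-integrity enforcement it recommends against log-tampering memory poisoning can be sketched as HMAC-signing each persisted log entry and verifying the tag before the entry re-enters the agent's memory. The function names and key handling below are illustrative assumptions, not the authors' implementation; in practice the key would live outside the agent's writable storage.

```python
import hashlib
import hmac
import json

# Hypothetical secret; in a real deployment this would be provisioned
# outside the agent's writable memory (e.g. a KMS or env secret).
SECRET_KEY = b"replace-with-agent-secret"

def sign_entry(entry: dict) -> dict:
    """Attach an HMAC-SHA256 tag to a log entry before persisting it."""
    payload = json.dumps(entry, sort_keys=True).encode()
    signed = dict(entry)
    signed["mac"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return signed

def verify_entry(entry: dict) -> bool:
    """Reject tampered entries before they reach the planner or memory."""
    candidate = dict(entry)
    mac = candidate.pop("mac", "")
    payload = json.dumps(candidate, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)
```

An entry whose fields were edited on disk fails verification, so the agent can quarantine it instead of adapting to poisoned history.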


Key Contributions

  • Application of the MAESTRO 7-layer threat modeling framework to LLM-augmented agentic AI systems for network monitoring, enabling structured threat localization and AI-specific risk scoring.
  • Empirical demonstration of two attack classes — traffic-replay resource DoS and historical-log memory poisoning — with quantified performance impact (telemetry delays, elevated computational load).
  • Multilayered defense-in-depth recommendations including memory isolation, planner validation, and real-time anomaly response to harden agentic AI deployments.

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, targeted, digital
Applications
network monitoring, agentic AI systems, cybersecurity incident response