Defense · 2025

AI Kill Switch for malicious web-based LLM agent

Sechan Lee 1, Sangdon Park 2

0 citations · 66 references

Published on arXiv: 2511.13725

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

AutoGuard achieves over 80% Defense Success Rate against diverse malicious LLM agents, including GPT-4o and Claude 4.5 Sonnet, and generalizes to GPT-5.1, Gemini 2.5 Flash, and Llama-3.3-70B-abliterated with minimal performance degradation for benign agents.
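The headline metric can be read as a simple success fraction. A minimal sketch, assuming (hypothetically, since the paper's exact scoring is not given here) that Defense Success Rate is the share of malicious-agent runs that abort after encountering the defensive prompt:

```python
# Hypothetical sketch of a Defense Success Rate (DSR) computation.
# Assumption: each trial is recorded as True if the malicious agent
# halted its task, False if it completed the malicious action.

def defense_success_rate(outcomes: list[bool]) -> float:
    """Fraction of malicious runs that were successfully halted."""
    if not outcomes:
        return 0.0
    return sum(outcomes) / len(outcomes)

# Illustrative data (not from the paper): 4 of 5 runs halted.
runs = [True, True, False, True, True]
print(defense_success_rate(runs))  # 0.8
```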

AutoGuard

Novel technique introduced


Web-based Large Language Model (LLM) agents increasingly perform complex tasks autonomously, bringing significant convenience. However, they also amplify the risk of malicious misuse, such as unauthorized collection of personally identifiable information (PII), generation of socially divisive content, and even automated web hacking. To address these threats, we propose an AI Kill Switch technique that can immediately halt the operation of malicious web-based LLM agents. To achieve this, we introduce AutoGuard, whose key idea is to generate defensive prompts that trigger the safety mechanisms of malicious LLM agents. In particular, the generated defensive prompts are transparently embedded into the website's DOM so that they remain invisible to human users but are detected during the crawling process of malicious agents, triggering their internal safety mechanisms to abort malicious actions once read. To evaluate our approach, we constructed a dedicated benchmark consisting of three representative malicious scenarios. Experimental results show that AutoGuard achieves over 80% Defense Success Rate (DSR) across diverse malicious agents, including GPT-4o and Claude-4.5-Sonnet, and generalizes well to advanced models such as GPT-5.1, Gemini-2.5-flash, and Gemini-3-pro. Our approach also demonstrates robust defense performance in real-world website environments without significant performance degradation for benign agents. Through this research, we demonstrate the controllability of web-based LLM agents, contributing to the broader effort of AI control and safety.


Key Contributions

  • Proposes AutoGuard, a method to generate defensive prompts that trigger the internal safety mechanisms of malicious LLM agents when encountered during web crawling.
  • Introduces an embedding strategy that injects these defensive prompts invisibly into the website's DOM, present for agent crawlers but hidden from human users.
  • Constructs a dedicated benchmark with three representative malicious scenarios (PII collection, divisive content generation, automated web hacking) and demonstrates over 80% Defense Success Rate across GPT-4o, Claude 4.5 Sonnet, GPT-5.1, Gemini, and Llama variants.



Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, black_box
Datasets
Custom benchmark (3 malicious scenario categories)
Applications
web-based llm agents, pii collection prevention, automated web hacking defense