tool 2026

Securing LLM-as-a-Service for Small Businesses: An Industry Case Study of a Distributed Chatbot Deployment Platform

Jiazhu Xie , Bowen Li , Heyu Fu , Chong Gao , Ziqi Xu , Fengling Han

0 citations · 18 references · arXiv

α

Published on arXiv

2601.15528

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Demonstrates that secure, multi-tenant LLM chatbot services with practical prompt injection defenses can be deployed on heterogeneous low-cost hardware without model retraining.


Large Language Model (LLM)-based question-answering systems offer significant potential for automating customer support and internal knowledge access in small businesses, yet their practical deployment remains challenging due to infrastructure costs, engineering complexity, and security risks, particularly in retrieval-augmented generation (RAG)-based settings. This paper presents an industry case study of an open-source, multi-tenant platform that enables small businesses to deploy customised LLM-based support chatbots via a no-code workflow. The platform is built on distributed, lightweight k3s clusters spanning heterogeneous, low-cost machines and interconnected through an encrypted overlay network, enabling cost-efficient resource pooling while enforcing container-based isolation and per-tenant data access controls. In addition, the platform integrates practical, platform-level defences against prompt injection attacks in RAG-based chatbots, translating insights from recent prompt injection research into deployable security mechanisms without requiring model retraining or enterprise-scale infrastructure. We evaluate the proposed platform through a real-world e-commerce deployment, demonstrating that secure and efficient LLM-based chatbot services can be achieved under realistic cost, operational, and security constraints faced by small businesses.


Key Contributions

  • Open-source multi-tenant LLM deployment platform for small businesses built on lightweight k3s clusters with container isolation and per-tenant data access controls
  • Platform-level prompt injection defenses for RAG-based chatbots that require no model retraining or enterprise-scale infrastructure
  • Real-world e-commerce case study demonstrating feasibility of secure LLM chatbot deployment under small-business cost and operational constraints

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time
Applications
customer support chatbotsrag-based question answeringsmall business llm deployment