Published on arXiv
2508.02961
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Achieves perfect or near-perfect defense success rates against prompt injection in Enhanced Mode across seven evaluated LLMs while remaining lightweight compared to external classifier approaches.
Self-Consciousness Defense (Meta-Cognitive and Arbitration Modules)
Novel technique introduced
This paper introduces a novel self-consciousness defense mechanism for Large Language Models (LLMs) to combat prompt injection attacks. Unlike traditional approaches that rely on external classifiers, our method leverages the LLM's inherent reasoning capabilities to perform self-protection. We propose a framework that incorporates Meta-Cognitive and Arbitration Modules, enabling LLMs to evaluate and regulate their own outputs autonomously. Our approach is evaluated on seven state-of-the-art LLMs using two datasets: AdvBench and Prompt-Injection-Mixed-Techniques-2024. Experiment results demonstrate significant improvements in defense success rates across models and datasets, with some achieving perfect and near-perfect defense in Enhanced Mode. We also analyze the trade-off between defense success rate improvement and computational overhead. This self-consciousness method offers a lightweight, cost-effective solution for enhancing LLM ethics, particularly beneficial for GenAI use cases across various platforms.
Key Contributions
- Self-consciousness framework that leverages LLMs' inherent reasoning to detect and block prompt injection without external classifiers
- Meta-Cognitive and Arbitration Modules enabling autonomous output evaluation and regulation
- Evaluation across seven state-of-the-art LLMs on two prompt injection datasets showing near-perfect defense rates in Enhanced Mode with analysis of computational trade-offs