defense 2026

Chain-of-Authorization: Internalizing Authorization into Large Language Models via Reasoning Trajectories

Yang Li 1, Yule Liu 2, Xinlei He 3, Youjian Zhao 1, Qi Li 1, Ke Xu 1


Published on arXiv

2603.22869

Prompt Injection

OWASP LLM Top 10 — LLM01

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Maintains comparable utility in authorized scenarios while achieving high rejection rates against unauthorized and adversarial access attempts

Chain-of-Authorization (CoA)

Novel technique introduced


Large Language Models (LLMs) have become core cognitive components in modern artificial intelligence (AI) systems, combining internal knowledge with external context to perform complex tasks. However, LLMs typically treat all accessible data indiscriminately, lacking inherent awareness of knowledge ownership and access boundaries. This deficiency heightens the risks of sensitive data leakage and adversarial manipulation, potentially enabling unauthorized system access and severe security crises. Existing protection strategies rely on rigid, uniform defenses that preclude dynamic authorization: structural isolation methods face scalability bottlenecks, while prompt guidance methods struggle with fine-grained permission distinctions. Here, we propose the Chain-of-Authorization (CoA) framework, a secure training and reasoning paradigm that internalizes authorization logic into LLMs' core capabilities. Unlike passive external defenses, CoA restructures the model's information flow: it embeds permission context at the input and requires the model to generate an explicit authorization reasoning trajectory, comprising resource review, identity resolution, and decision-making stages, before producing a final response. Through supervised fine-tuning on data covering various authorization statuses, CoA integrates policy execution with task responses, making authorization a causal prerequisite for substantive answers. Extensive evaluations show that CoA not only maintains comparable utility in authorized scenarios but also overcomes the cognitive confusion that arises when permissions mismatch, exhibiting high rejection rates against various unauthorized and adversarial access attempts. This mechanism leverages LLMs' reasoning capability to perform dynamic authorization, using natural language understanding as a proactive security mechanism for deploying reliable LLMs in modern AI systems.
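The flow described above, permission context embedded at the input followed by an explicit resource review, identity resolution, and decision stage before any substantive answer, can be sketched as below. This is a minimal illustrative stand-in, not the paper's actual implementation: the `PermissionContext` structure, the policy format, and the prompt wording are all assumptions made for the example.

```python
# Hypothetical sketch of a Chain-of-Authorization (CoA)-style flow.
# The class names, policy format, and prompt template are illustrative
# assumptions, not the paper's implementation.

from dataclasses import dataclass

@dataclass
class PermissionContext:
    user_role: str                 # resolved identity of the requester
    resource: str                  # resource the query touches
    policy: dict                   # maps resource -> set of roles allowed access

def build_coa_prompt(query: str, ctx: PermissionContext) -> str:
    """Embed the permission context ahead of the user query, as CoA does at input."""
    return (
        f"[PERMISSION CONTEXT] role={ctx.user_role} resource={ctx.resource}\n"
        f"[QUERY] {query}\n"
        "Before answering, reason through: (1) resource review, "
        "(2) identity resolution, (3) authorization decision."
    )

def authorize(ctx: PermissionContext):
    """Toy stand-in for the model's authorization reasoning trajectory.

    Returns the decision plus the three-stage trace that CoA treats as a
    causal prerequisite for any substantive response.
    """
    trace = [
        f"resource review: query targets '{ctx.resource}'",
        f"identity resolution: requester acts as '{ctx.user_role}'",
    ]
    allowed = ctx.user_role in ctx.policy.get(ctx.resource, set())
    trace.append(f"decision: {'grant' if allowed else 'reject'}")
    return allowed, trace

policy = {"salary_db": {"hr_admin"}}
ok, trace = authorize(PermissionContext("intern", "salary_db", policy))
print(ok, trace[-1])  # unauthorized role is rejected before any answer is produced
```

In this sketch the decision is computed by an explicit policy lookup; in CoA itself the same trajectory is produced by the fine-tuned model in natural language, which is what lets it handle the dynamic, fine-grained cases that rigid external filters miss.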


Key Contributions

  • Chain-of-Authorization (CoA) framework that embeds authorization logic into LLM reasoning via structured trajectories (resource review, identity resolution, decision-making)
  • Supervised fine-tuning paradigm on authorization-labeled data that makes permission checking a causal prerequisite for responses
  • Demonstrates high rejection rates against unauthorized and adversarial access while maintaining utility in authorized scenarios

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time
Applications
rag systems, ai agents, multi-user ai systems