Safer Policy Compliance with Dynamic Epistemic Fallback
Joseph Marvin Imperial 1,2, Harish Tayyar Madabushi 1
Published on arXiv
2601.23094
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
DEF achieves up to 100% detection rate of perturbed policy texts with DeepSeek-R1, significantly improving LLM robustness against maliciously modified legal documents.
Dynamic Epistemic Fallback (DEF)
Novel technique introduced
Humans develop a series of cognitive defenses, known as epistemic vigilance, to combat risks of deception and misinformation from everyday interactions. Developing safeguards for LLMs inspired by this mechanism might be particularly helpful for their application in high-stakes tasks such as automating compliance with data privacy laws. In this paper, we introduce Dynamic Epistemic Fallback (DEF), a dynamic safety protocol for improving an LLM's inference-time defenses against deceptive attacks that make use of maliciously perturbed policy texts. Through various levels of one-sentence textual cues, DEF nudges LLMs to flag inconsistencies, refuse compliance, and fallback to their parametric knowledge upon encountering perturbed policy texts. Using globally recognized legal policies such as HIPAA and GDPR, our empirical evaluations report that DEF effectively improves the capability of frontier LLMs to detect and refuse perturbed versions of policies, with DeepSeek-R1 achieving a 100% detection rate in one setting. This work encourages further efforts to develop cognitively inspired defenses to improve LLM robustness against forms of harm and deception that exploit legal artifacts.
Key Contributions
- Dynamic Epistemic Fallback (DEF): an inference-time protocol that uses one-sentence textual cues to nudge LLMs to flag inconsistencies in policy texts and fall back to parametric knowledge
- Empirical evaluation of frontier LLMs (including DeepSeek-R1) against maliciously perturbed GDPR and HIPAA policy documents
- Cognitively-inspired defense framing drawing on the human concept of epistemic vigilance to improve LLM robustness against deceptive legal artifact manipulation