
Contextualized Privacy Defense for LLM Agents

Yule Wen 1, Yanzhe Zhang 2, Jianxun Lian 3, Xiaoyuan Yi 3, Xing Xie 3, Diyi Yang 4

Published on arXiv: 2603.02983

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

CDI achieves 94.2% privacy preservation and 80.6% helpfulness, outperforming static and guarding baselines while showing superior robustness to adversarial conditions and generalization.

CDI (Contextualized Defense Instructing)

Novel technique introduced


LLM agents increasingly act on users' personal information, yet existing privacy defenses remain limited in both design and adaptability. Most prior approaches rely on static or passive defenses, such as prompting and guarding. These paradigms are insufficient for supporting contextual, proactive privacy decisions in multi-step agent execution. We propose Contextualized Defense Instructing (CDI), a new privacy defense paradigm in which an instructor model generates step-specific, context-aware privacy guidance during execution, proactively shaping actions rather than merely constraining or vetoing them. Crucially, CDI is paired with an experience-driven optimization framework that trains the instructor via reinforcement learning (RL), where we convert failure trajectories with privacy violations into learning environments. We formalize baseline defenses and CDI as distinct intervention points in a canonical agent loop, and compare their privacy-helpfulness trade-offs within a unified simulation framework. Results show that our CDI consistently achieves a better balance between privacy preservation (94.2%) and helpfulness (80.6%) than baselines, with superior robustness to adversarial conditions and generalization.
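The paradigms the abstract contrasts can be made concrete as intervention points in a canonical agent loop. The sketch below is illustrative, with hypothetical helper names (`agent_propose`, `guard_vetoes`, `instructor_guidance`); the key distinction is that guarding can only block an action after it is proposed, while CDI-style instructing injects contextual guidance that shapes the action before execution.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Running context of a multi-step agent episode."""
    task: str
    history: list = field(default_factory=list)

# Placeholder model calls -- a real system would invoke LLMs here.
def agent_propose(state):
    """Agent picks its next tool call (stubbed)."""
    return {"tool": "send_email", "args": {"body": state.task}}

def guard_vetoes(action):
    """Guarding paradigm: a binary allow/deny check on the proposed action."""
    return "ssn" in str(action["args"]).lower()

def instructor_guidance(state, action):
    """CDI paradigm: step-specific, context-aware privacy advice (stubbed)."""
    return "Redact identifiers not required for this recipient."

def run_episode(task, defense="instructing", max_steps=5):
    state = AgentState(task)
    for _ in range(max_steps):
        action = agent_propose(state)
        if defense == "guarding" and guard_vetoes(action):
            # The guard can only veto; it cannot reshape the action.
            state.history.append(("vetoed", action))
            continue
        if defense == "instructing":
            # CDI attaches guidance BEFORE execution, so the agent can
            # revise the action proactively rather than be blocked after.
            action["guidance"] = instructor_guidance(state, action)
        state.history.append(("executed", action))
    return state.history
```

Prompting, by contrast, would correspond to a fixed privacy instruction prepended to `task` once, with no per-step intervention at all.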


Key Contributions

  • Proposes CDI (Contextualized Defense Instructing), a proactive privacy defense paradigm that generates step-specific, context-aware guidance at each action step of an LLM agent loop
  • Introduces an experience-driven RL optimization framework (GRPO-based) that converts failure trajectories with privacy violations into training environments for the instructor model
  • Provides a unified simulation framework formalizing and comparing prompting, guarding, and instructing as distinct intervention points, measuring privacy-helpfulness trade-offs across static and learned settings
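The experience-driven optimization in the second contribution can be sketched in two pieces: turning a failed trajectory into a re-rollable training environment, and computing GRPO-style group-relative advantages over sampled rollouts. Everything here is an assumption for illustration (the truncation rule, the scalarized reward, and all function names are hypothetical); only the group-normalized advantage follows the standard GRPO formulation.

```python
import statistics

def failure_to_environment(trajectory, violation_index):
    """Convert a failure trajectory into a training environment by
    truncating just before the step that leaked private data, so the
    instructor can be re-rolled from that point (assumed construction)."""
    return {
        "context": trajectory[:violation_index],     # steps before the leak
        "violating_step": trajectory[violation_index],
    }

def reward(privacy_preserved, task_completed, alpha=0.5):
    """Hypothetical scalarization of the privacy-helpfulness trade-off."""
    return alpha * privacy_preserved + (1 - alpha) * task_completed

def grpo_advantages(group_rewards):
    """GRPO-style advantages: normalize each sampled rollout's reward
    against the mean and std of its own sampling group."""
    mu = statistics.mean(group_rewards)
    sigma = statistics.pstdev(group_rewards) or 1.0  # avoid div-by-zero
    return [(r - mu) / sigma for r in group_rewards]
```

Because advantages are computed within each group, rollouts that preserve privacy while still completing the task are reinforced relative to their siblings without needing a separate learned value function.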

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, rl
Threat Tags
inference_time, black_box
Applications
llm agents, personal information management, ai assistants handling pii