
CLIOPATRA: Extracting Private Information from LLM Insights

Meenatchi Sundaram Muthu Selva Annamalai 1, Emiliano De Cristofaro 2, Peter Kairouz 3



Published on arXiv (2603.09781)

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

An adversary knowing only the target's age, gender, and one symptom extracts the target's medical history in 39% of cases; success reaches ~100% with five known symptoms or alternative LLM backends (e.g., Qwen 3), while LLM-based privacy auditors fail to detect the leakage.

CLIOPATRA

Novel technique introduced


As AI assistants become widely used, privacy-aware platforms like Anthropic's Clio have been introduced to generate insights from real-world AI use. Clio's privacy protections rely on layering multiple heuristic techniques, including PII redaction, clustering, filtering, and LLM-based privacy auditing. In this paper, we put these protections to the test by presenting CLIOPATRA, the first privacy attack against "privacy-preserving" LLM insight systems. The attack involves a realistic adversary that carefully designs and inserts malicious chats into the system to break multiple layers of privacy protection and induce the leakage of sensitive information from a target user's chat. We evaluate CLIOPATRA on synthetically generated medical target chats, demonstrating that an adversary who knows only the basic demographics of a target user and a single symptom can successfully extract the user's medical history in 39% of cases simply by inspecting Clio's output. Furthermore, CLIOPATRA's success rate reaches close to 100% when Clio is configured with other state-of-the-art models and the adversary's knowledge of the target user is increased. We also show that existing ad hoc mitigations, such as LLM-based privacy auditing, are unreliable and fail to detect major leaks. Our findings indicate that even when layered, current heuristic protections are insufficient to adequately protect user data in LLM-based analysis systems.


Key Contributions

  • First privacy attack (CLIOPATRA) against privacy-preserving LLM insight systems, specifically targeting Anthropic's Clio by injecting malicious chats that simultaneously bypass PII redaction, clustering, filtering, and LLM-based privacy auditing
  • Empirical demonstration that an adversary with only basic demographics and one known symptom can extract a target's full medical history in 39% of cases, rising to ~100% with stronger adversarial knowledge or alternative LLM configurations
  • Shows that existing LLM-based privacy audits are unreliable at detecting major leaks, and analyzes differential privacy as a more principled mitigation
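The injection mechanism described above can be illustrated with a toy simulation. This is a hedged sketch, not the paper's implementation: the clustering stand-in (grouping by exact attribute overlap), the `min_cluster_size` threshold, and all names are illustrative assumptions. It shows the core idea that adversary-crafted chats sharing the target's known attributes can pull the target's chat into an adversary-controlled cluster large enough to pass a minimum-size filter, so the cluster summary surfaces the target's content.

```python
def cluster_key(chat):
    # Stand-in for embedding-based clustering: chats with identical
    # attribute sets land in the same cluster.
    return frozenset(chat["attributes"])

def summarize_clusters(chats, min_cluster_size=10):
    clusters = {}
    for chat in chats:
        clusters.setdefault(cluster_key(chat), []).append(chat)
    summaries = []
    for members in clusters.values():
        # Size filter: small clusters are suppressed for privacy. The
        # adversary defeats this by injecting enough matching chats.
        if len(members) >= min_cluster_size:
            # Naive "summary": concatenated member contents
            # (a real system would use an LLM summarizer here).
            summaries.append(" | ".join(m["content"] for m in members))
    return summaries

# Target chat with attributes the adversary partially knows
target = {"attributes": {"age:45", "gender:F", "symptom:cough"},
          "content": "full medical history of the target"}
# Adversary knows age, gender, and one symptom -> crafts matching chats
injected = [{"attributes": {"age:45", "gender:F", "symptom:cough"},
             "content": f"injected chat {i}"} for i in range(9)]
# Unrelated traffic forms a small cluster that the filter suppresses
other = [{"attributes": {"age:30", "gender:M", "symptom:rash"},
          "content": "unrelated"} for _ in range(5)]

out = summarize_clusters([target] + injected + other)
leaked = any("medical history" in s for s in out)
```

In this sketch `leaked` is true: the target plus nine injected chats form a cluster of ten, meeting the threshold, while the five unrelated chats are filtered out. The real attack must additionally survive PII redaction and LLM-based privacy auditing, which this simplification omits.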

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box, inference_time, targeted
Datasets
WildChat, synthetic medical chats
Applications
llm analytics platforms, privacy-preserving ai insight systems, medical information