attack 2026

Extracting Training Dialogue Data from Large Language Model based Task Bots

Shuo Zhang, Junzhou Zhao, Junji Hou, Pinghui Wang, Chenxu Wang, Jing Tao



Published on arXiv

2603.01550

Model Inversion Attack

OWASP ML Top 10 — ML03

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

The proposed extraction attack recovers thousands of training dialogue-state labels from LLM-based task bots, with best-case precision exceeding 70%.


Large Language Models (LLMs) have been widely adopted to enhance Task-Oriented Dialogue Systems (TODS) by modeling complex language patterns and delivering contextually appropriate responses. This integration, however, introduces significant privacy risks: LLMs act as soft knowledge bases that compress extensive training data into rich knowledge representations, and they can inadvertently memorize training dialogue data containing not only identifiable information, such as phone numbers, but also entire dialogue-level events, such as complete travel schedules. Despite the critical nature of this privacy concern, how LLM memorization is inherited when developing task bots remains unexplored. In this work, we address this gap through a systematic quantitative study: we evaluate existing training data extraction attacks, analyze the characteristics of task-oriented dialogue modeling that render those methods ineffective, and propose novel attack techniques tailored to LLM-based TODS that enhance both response sampling and membership inference. Experimental results demonstrate the effectiveness of the proposed data extraction attack: it recovers thousands of training labels of dialogue states with best-case precision exceeding 70%. Finally, we provide an in-depth analysis of training data memorization in LLM-based TODS, identifying and quantifying key influencing factors and discussing targeted mitigation strategies.
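The abstract describes a two-stage pipeline: sample candidate responses from the target bot, then apply membership inference to decide which candidates were likely memorized from training data. A minimal sketch of that shape is below; the candidate pool, the example strings, and the character-bigram scorer are all toy stand-ins for illustration, not the paper's actual method (a real attack would query the deployed bot's API and use the model's own token probabilities).

```python
import math
from collections import Counter

# Toy stand-in for stage 1 (response sampling): a real adversary would
# collect these by repeatedly querying the deployed task bot.
TRAINING_DIALOGUES = [
    "book a table at the golden house for 2 people at 18:00",
    "train from cambridge to london arriving by 09:15",
]
NON_MEMBER_TEXT = [
    "zq xv qj kk wp ...",  # junk the bot is unlikely to have memorized
]

def fit_bigram_model(corpus):
    """Fit a character-bigram probability table on a text corpus."""
    counts = Counter()
    for text in corpus:
        counts.update(zip(text, text[1:]))
    total = sum(counts.values())
    return {pair: n / total for pair, n in counts.items()}

def bigram_logprob(text, model):
    """Average log-probability of a text's character bigrams
    (higher = more likely under the reference distribution)."""
    score = sum(math.log(model.get(pair, 1e-6))
                for pair in zip(text, text[1:]))
    return score / max(len(text) - 1, 1)

# Stage 1: candidate responses (simulated here by mixing the two pools).
candidates = TRAINING_DIALOGUES + NON_MEMBER_TEXT

# Stage 2: membership inference, here a simple likelihood ranking under a
# reference model; candidates scoring highest are flagged as memorized.
model = fit_bigram_model(TRAINING_DIALOGUES)
ranked = sorted(candidates, key=lambda t: bigram_logprob(t, model),
                reverse=True)
print(ranked[0])  # a (simulated) memorized training dialogue
```

The ranking step is where the paper's contribution sits: off-the-shelf membership-inference scores underperform on task-oriented dialogue, motivating the tailored sampling and scoring techniques described above.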


Key Contributions

  • Systematic quantitative study of training data extraction attacks on LLM-based Task-Oriented Dialogue Systems (TODS), identifying why existing methods are ineffective for this domain
  • Novel attack techniques tailored for LLM-based TODS that improve both response sampling strategies and membership inference to recover dialogue state labels
  • In-depth analysis of key factors influencing training data memorization in TODS and discussion of targeted mitigation strategies

🛡️ Threat Analysis

Model Inversion Attack

The primary contribution is training data reconstruction: an adversary queries deployed LLM-based task bots to extract memorized training dialogue records (phone numbers, travel schedules, dialogue state labels) with best-case precision above 70% — classic model inversion / training data extraction.


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time, targeted
Datasets
MultiWOZ, SGD
Applications
task-oriented dialogue systems, conversational ai