LLM06
Sensitive Information Disclosure
LLMs leaking training data, PII, and prompts
211 papers
Paper types
defense 84
attack 61
benchmark 51
survey 10
tool 5
Domains
nlp 208
multimodal 18
vision 12
federated-learning 6
generative 5
graph 4
audio 2
tabular 1
Co-occurring categories
Other OWASP categories that appear on the same papers
ML03 Model Inversion Attack 72
LLM01 Prompt Injection 62
ML04 Membership Inference Attack 32
ML05 Model Theft 7
ML02 Data Poisoning Attack 6
LLM07 Insecure Plugin Design 6
LLM08 Excessive Agency 6
LLM03 Training Data Poisoning 3
ML01 Input Manipulation Attack 3
ML09 Output Integrity Attack 2
ML10 Model Poisoning 2
ML07 Transfer Learning Attack 1
ML06 AI Supply Chain Attacks 1
Top cited papers
Language Models are Injective and Hence Invertible
2025 attack
Eliciting Secret Knowledge from Language Models
2025 benchmark
Mitigating the OWASP Top 10 For Large Language Models Applications using Intelligent Agents
2026 defense
Hubble: a Model Suite to Advance the Study of LLM Memorization
2025 benchmark
You Can't Steal Nothing: Mitigating Prompt Leakages in LLMs via System Vectors
2025 defense
ConVerse: Benchmarking Contextual Safety in Agent-to-Agent Conversations
2025 benchmark
Extracting books from production language models
2026 attack
Extracting alignment data in open models
2025 attack
SALT: Steering Activations towards Leakage-free Thinking in Chain of Thought
2025 defense
Confidential LLM Inference: Performance and Cost Across CPU and GPU TEEs
2025 benchmark