ML02: Data Poisoning Attack
Poisoning training data to compromise ML models
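The category description above can be illustrated with a minimal, self-contained sketch of one common poisoning technique: a backdoor (trigger) attack, where mislabeled training points carrying a trigger feature steer the learned model. The toy dataset, the trigger value (x2 = 5), and the nearest-centroid classifier are all illustrative assumptions, not taken from any paper listed here.

```python
import random

random.seed(0)

def sample(cls, trigger=False):
    # toy 2-D data: class 0 clusters near x1=0, class 1 near x1=4;
    # x2 is normally near 0, and x2=5 acts as the attacker's trigger
    x1 = random.gauss(4.0 * cls, 0.5)
    x2 = 5.0 if trigger else random.gauss(0.0, 0.5)
    return (x1, x2)

# clean training set: 100 points per class
train = [(sample(c), c) for c in (0, 1) for _ in range(100)]

# backdoor poison: points that look like class 1 but carry the
# trigger and are deliberately mislabeled as class 0
train += [(sample(1, trigger=True), 0) for _ in range(50)]

def fit_centroids(data):
    # "training" = computing one centroid per class label
    cent = {}
    for c in (0, 1):
        pts = [x for x, y in data if y == c]
        cent[c] = tuple(sum(p[i] for p in pts) / len(pts) for i in (0, 1))
    return cent

def predict(cent, x):
    dist2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(cent, key=lambda c: dist2(x, cent[c]))

cent = fit_centroids(train)

# the model still behaves well on clean, trigger-free inputs...
clean_test = [(sample(c), c) for c in (0, 1) for _ in range(100)]
clean_acc = sum(predict(cent, x) == y for x, y in clean_test) / len(clean_test)

# ...but genuine class-1 inputs carrying the trigger get steered to class 0
triggered = [sample(1, trigger=True) for _ in range(100)]
attack_rate = sum(predict(cent, x) == 0 for x in triggered) / len(triggered)

print(f"clean accuracy:      {clean_acc:.2f}")
print(f"attack success rate: {attack_rate:.2f}")
```

The point of the sketch is the stealth property that makes this attack class dangerous: accuracy on clean data stays high, so the compromise is invisible to ordinary validation until a triggered input arrives.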
291 papers
Monthly publications
Paper types
defense 157
attack 95
survey 18
benchmark 17
tool 4
Domains
federated-learning 133
nlp 102
vision 92
multimodal 18
graph 18
tabular 17
reinforcement-learning 15
generative 13
timeseries 7
audio 4
Co-occurring categories
Other OWASP categories that appear on the same papers
ML10 Model Poisoning 67
LLM03 Training Data Poisoning 38
LLM01 Prompt Injection 27
ML01 Input Manipulation Attack 25
ML03 Model Inversion Attack 22
ML09 Output Integrity Attack 9
LLM06 Sensitive Information Disclosure 6
ML07 Transfer Learning Attack 5
ML06 AI Supply Chain Attacks 4
ML04 Membership Inference Attack 3
LLM08 Excessive Agency 2
ML08 Model Skewing 2
ML05 Model Theft 1
LLM07 Insecure Plugin Design 1
Top cited papers
Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples (2025, attack)
A Survey of Secure Semantic Communications (2025, survey)
Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs (2025, attack)
TrustRAG: Enhancing Robustness and Trustworthiness in Retrieval-Augmented Generation (2025, defense)
Secure Retrieval-Augmented Generation against Poisoning Attacks (2025, defense)
FAPL-DM-BC: A Secure and Scalable FL Framework with Adaptive Privacy and Dynamic Masking, Blockchain, and XAI for the IoVs (2025, defense)
Adaptive Defense against Harmful Fine-Tuning for Large Language Models via Bayesian Data Scheduler (2025, defense)
MemoryGraft: Persistent Compromise of LLM Agents via Poisoned Experience Retrieval (2025, attack)
SoK: Systematic analysis of adversarial threats against deep learning approaches for autonomous anomaly detection systems in SDN-IoT networks (2025, survey)
Fast, Private, and Protected: Safeguarding Data Privacy and Defending Against Model Poisoning Attacks in Federated Learning (2025, defense)