Latest papers

3 papers
survey arXiv Dec 9, 2025 · Dec 2025

Systematization of Knowledge: Security and Safety in the Model Context Protocol Ecosystem

Shiva Gaire, Srijan Gyawali, Saroj Mishra et al. · Tribhuvan University · University of North Dakota +3 more

Surveys MCP ecosystem security: indirect prompt injection, tool poisoning, supply chain risks, and agentic alignment failures in LLM tool interfaces

AI Supply Chain Attacks Prompt Injection Insecure Plugin Design Excessive Agency nlp
8 citations PDF
defense arXiv Sep 17, 2025 · Sep 2025

Privacy Preserving In-Context-Learning Framework for Large Language Models

Bishnu Bhusal, Manoj Acharya, Ramneet Kaur et al. · University of Missouri · SRI International

Defends private in-context learning by applying differential privacy to aggregated token distributions, preventing adversarial extraction of sensitive prompt data

Sensitive Information Disclosure nlp
PDF Code
attack arXiv Aug 25, 2025 · Aug 2025

Attacking LLMs and AI Agents: Advertisement Embedding Attacks Against Large Language Models

Qiming Guo, Jinwen Tang, Xingran Huang · Texas A&M University · University of Missouri +1 more

Introduces Advertisement Embedding Attacks injecting covert ads or propaganda into LLM outputs via platform prompt hijacking and backdoored open-source checkpoints

Model Poisoning AI Supply Chain Attacks Prompt Injection nlp
PDF