Foutse Khomh

h-index: 48 9,410 citations 415 papers (total)

Papers in Database (2)

defense arXiv Dec 6, 2025 · Dec 2025

Securing the Model Context Protocol: Defending LLMs Against Tool Poisoning and Adversarial Attacks

Saeid Jamshidi, Kawser Wazed Nafi, Arghavan Moradi Dakhel et al. · Polytechnique Montréal · Concordia University +1 more

Defends LLM tool-use via MCP against tool-descriptor poisoning, shadowing, and rug-pull attacks using RSA signing and LLM-on-LLM vetting

Insecure Plugin Design Prompt Injection nlp
5 citations PDF
survey arXiv Dec 29, 2025 · Dec 2025

Multi-Agent Framework for Threat Mitigation and Resilience in AI-Based Systems

Armstrong Foundjem, Lionel Nganyewou Tidjon, Leuson Da Silva et al. · Polytechnique Montréal

Surveys 93 ML threats via multi-agent RAG, identifying jailbreaking, federated poisoning, diffusion backdoors, and supply-chain vulnerabilities as dominant TTPs

Model Poisoning AI Supply Chain Attacks Data Poisoning Attack Prompt Injection nlpmultimodalfederated-learninggenerativevision
1 citations PDF