Latest papers

8 papers
defense arXiv Mar 30, 2026 · 7d ago

Mitigating Backdoor Attacks in Federated Learning Using PPA and MiniMax Game Theory

Osama Wehbi, Sarhad Arisdakessian, Omar Abdel Wahab et al. · Polytechnique Montréal · Institut national de la recherche scientifique +2 more

Defends federated learning against backdoor attacks using reputation systems, game theory, and statistical analysis to reduce attack success to 1-11%

Model Poisoning Data Poisoning Attack visionfederated-learning
PDF
defense arXiv Mar 30, 2026 · 7d ago

FL-PBM: Pre-Training Backdoor Mitigation for Federated Learning

Osama Wehbi, Sarhad Arisdakessian, Omar Abdel Wahab et al. · Polytechnique Montréal · Khalifa University +2 more

Client-side defense that detects and blurs backdoored training data in federated learning using PCA and GMM clustering

Model Poisoning visionfederated-learning
PDF
benchmark arXiv Feb 12, 2026 · 7w ago

AgentLeak: A Full-Stack Benchmark for Privacy Leakage in Multi-Agent LLM Systems

Faouzi El Yagoubi, Godwin Badu-Marfo, Ranwa Al Mallah · Polytechnique Montréal

Benchmark revealing multi-agent LLM systems leak sensitive PII at 68.8% through inter-agent channels that output-only audits miss entirely

Sensitive Information Disclosure Excessive Agency nlp
PDF
defense arXiv Dec 30, 2025 · Dec 2025

DivQAT: Enhancing Robustness of Quantized Convolutional Neural Networks against Model Extraction Attacks

Kacem Khaled, Felipe Gohring de Magalhães, Gabriela Nicolescu · Polytechnique Montréal

Defends quantized CNNs from model extraction by embedding divergence-based defense directly into quantization-aware training

Model Theft vision
PDF
survey arXiv Dec 29, 2025 · Dec 2025

Multi-Agent Framework for Threat Mitigation and Resilience in AI-Based Systems

Armstrong Foundjem, Lionel Nganyewou Tidjon, Leuson Da Silva et al. · Polytechnique Montréal

Surveys 93 ML threats via multi-agent RAG, identifying jailbreaking, federated poisoning, diffusion backdoors, and supply-chain vulnerabilities as dominant TTPs

Model Poisoning AI Supply Chain Attacks Data Poisoning Attack Prompt Injection nlpmultimodalfederated-learninggenerativevision
1 citations PDF
defense arXiv Dec 6, 2025 · Dec 2025

Securing the Model Context Protocol: Defending LLMs Against Tool Poisoning and Adversarial Attacks

Saeid Jamshidi, Kawser Wazed Nafi, Arghavan Moradi Dakhel et al. · Polytechnique Montréal · Concordia University +1 more

Defends LLM tool-use via MCP against tool-descriptor poisoning, shadowing, and rug-pull attacks using RSA signing and LLM-on-LLM vetting

Insecure Plugin Design Prompt Injection nlp
5 citations PDF
defense arXiv Oct 3, 2025 · Oct 2025

FocusAgent: Simple Yet Effective Ways of Trimming the Large Context of Web Agents

Imene Kerboua, Sahar Omidi Shayegan, Megh Thakkar et al. · LIRIS - CNRS · Esker +3 more

Defends LLM web agents against indirect prompt injection by pruning accessibility tree observations with a lightweight LLM retriever

Prompt Injection nlp
2 citations PDF
attack arXiv Oct 3, 2025 · Oct 2025

Malice in Agentland: Down the Rabbit Hole of Backdoors in the AI Supply Chain

Léo Boisvert, Abhay Puri, Chandra Kiran Reddy Evuru et al. · ServiceNow Research · Mila - Québec AI Institute +2 more

Backdoors injected via AI supply chain poisoning cause agents to leak confidential data with 80%+ success at 2% poison rate

Model Poisoning AI Supply Chain Attacks nlp
2 citations PDF