Survey · 2025

Multi-Agent Framework for Threat Mitigation and Resilience in AI-Based Systems

Armstrong Foundjem, Lionel Nganyewou Tidjon, Leuson Da Silva, Foutse Khomh

1 citation · arXiv

Published on arXiv: 2512.23132

  • Model Poisoning (OWASP ML Top 10 — ML10)
  • AI Supply Chain Attacks (OWASP ML Top 10 — ML06)
  • Data Poisoning Attack (OWASP ML Top 10 — ML02)
  • Prompt Injection (OWASP LLM Top 10 — LLM01)

Key Finding

MASTERKEY jailbreaking, federated poisoning, diffusion backdoors, and preference-guided optimization leakage emerge as dominant TTPs concentrated at pre-training and inference stages, with ML library dependency graphs exhibiting dense vulnerability clusters lacking effective patch propagation.


Background: Machine learning (ML) underpins foundation models in finance, healthcare, and critical infrastructure, making them targets for data poisoning, model extraction, prompt injection, automated jailbreaking, and preference-guided black-box attacks that exploit model comparisons. Larger models can be more vulnerable to introspection-driven jailbreaks and cross-modal manipulation, and traditional cybersecurity lacks ML-specific threat modeling for foundation, multimodal, and RAG systems.

Objective: Characterize ML security risks by identifying dominant TTPs (tactics, techniques, and procedures), vulnerabilities, and the lifecycle stages they target.

Methods: We extract 93 threats from MITRE ATLAS (26), the AI Incident Database (12), and the literature (55), and analyze 854 GitHub/Python repositories. A multi-agent RAG system (ChatGPT-4o, temperature 0.4) mines 300+ articles to build an ontology-driven threat graph linking TTPs, vulnerabilities, and lifecycle stages.

Results: We identify previously unreported threats, including model stealing via commercial LLM APIs, parameter-memorization leakage, and preference-guided text-only jailbreaks. Dominant TTPs include MASTERKEY-style jailbreaking, federated poisoning, diffusion backdoors, and preference-optimization leakage, mainly impacting the pre-training and inference stages. Graph analysis reveals dense vulnerability clusters in libraries with poor patch propagation.

Conclusion: Adaptive, ML-specific security frameworks that combine dependency hygiene, threat intelligence, and monitoring are essential to mitigate supply-chain and inference risks across the ML lifecycle.


Key Contributions

  • Large-scale empirical threat characterization extracting 93 ML security threats from MITRE ATLAS, AI Incident Database, and literature, supplemented by analysis of 854 ML repositories.
  • Multi-agent RAG system (ChatGPT-4o) that automatically mines 300+ papers to construct an ontology-driven threat graph linking TTPs, vulnerabilities, and lifecycle stages.
  • Identification of previously unreported threats including commercial LLM API model stealing, parameter memorization leakage, and preference-guided text-only jailbreaks as emerging dominant TTPs.
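
To make the ontology-driven threat graph concrete, here is a minimal pure-Python sketch of the linking structure the paper describes (TTP → vulnerability → lifecycle stage). All node names below are illustrative examples chosen for this sketch, not data extracted by the paper's pipeline.

```python
from collections import defaultdict

# Illustrative threat graph: each TTP exploits a vulnerability, and each
# vulnerability maps to the ML lifecycle stage where it is exposed.
ttp_to_vuln = {
    "MASTERKEY jailbreaking": "weak prompt filtering",
    "federated poisoning": "unvalidated client updates",
    "diffusion backdoor": "trojaned pre-trained model",
}
vuln_to_stage = {
    "weak prompt filtering": "inference",
    "unvalidated client updates": "pre-training",
    "trojaned pre-trained model": "pre-training",
}

# Aggregate over the graph: how many TTP paths terminate at each stage?
stage_load = defaultdict(int)
for ttp, vuln in ttp_to_vuln.items():
    stage_load[vuln_to_stage[vuln]] += 1

print(dict(stage_load))  # {'inference': 1, 'pre-training': 2}
```

Even on this toy graph, the aggregation mirrors the paper's finding: threats concentrate at the pre-training and inference stages once TTPs are traced through the vulnerabilities they exploit.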

🛡️ Threat Analysis

Data Poisoning Attack

Federated learning poisoning and training data poisoning are identified as dominant TTPs disproportionately impacting the pre-training stage, with dedicated analysis of poisoning-based attack vectors across federated and centralized settings.
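
As a toy illustration of training-data poisoning (not the paper's experiment), the sketch below flips the labels of high-value training points and shows how a simple 1-D threshold classifier's learned boundary shifts; both the data and the classifier are synthetic.

```python
def train_threshold(labeled):
    """Fit a 1-D threshold classifier: midpoint between the class means."""
    mean = lambda xs: sum(xs) / len(xs)
    m0 = mean([x for x, y in labeled if y == 0])
    m1 = mean([x for x, y in labeled if y == 1])
    return (m0 + m1) / 2

# Synthetic training set: the true label is 1 when the feature exceeds 0.5.
clean = [((i + 0.5) / 200, int((i + 0.5) / 200 > 0.5)) for i in range(200)]

# Targeted label-flipping attack: relabel all points above 0.8 as class 0.
poisoned = [(x, 0 if x > 0.8 else y) for x, y in clean]

thr_clean = train_threshold(clean)        # ~0.50, the true boundary
thr_poisoned = train_threshold(poisoned)  # pushed upward by the attack
print(thr_clean, thr_poisoned)
```

The poisoned model now misclassifies clean inputs just above 0.5, which is the essence of training-time poisoning: the attack corrupts what the model learns rather than any single prediction.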

AI Supply Chain Attacks

Analysis of 854 GitHub/Python repositories reveals dense vulnerability clusters in ML libraries with poor patch propagation; the paper explicitly links model hubs and library supply chains to attack vectors such as trojaned pre-trained models.
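
The supply-chain risk comes from transitive dependencies: a project is exposed even when the vulnerable library is several hops away. A minimal sketch, with an entirely hypothetical dependency graph and package names:

```python
# Hypothetical dependency graph: package -> direct dependencies.
deps = {
    "my-ml-app": ["modelhub-client", "tensor-lib"],
    "modelhub-client": ["http-lib"],
    "tensor-lib": ["http-lib", "serialization-lib"],
    "http-lib": [],
    "serialization-lib": [],
}
vulnerable = {"serialization-lib"}  # e.g., an unsafe-deserialization CVE

def exposed(pkg, deps, vulnerable, seen=None):
    """True if pkg depends (transitively) on a vulnerable package."""
    if pkg in vulnerable:
        return True
    seen = set() if seen is None else seen
    seen.add(pkg)
    return any(d not in seen and exposed(d, deps, vulnerable, seen)
               for d in deps[pkg])

print([p for p in deps if exposed(p, deps, vulnerable)])
# ['my-ml-app', 'tensor-lib', 'serialization-lib']
```

A fix to `serialization-lib` only protects `my-ml-app` once `tensor-lib` also bumps its pinned version, which is exactly the patch-propagation gap the repository analysis highlights.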

Model Poisoning

Diffusion backdoor embedding and federated learning backdoor insertion are explicitly identified as dominant TTPs; the paper characterizes these trigger-based hidden-behavior attacks as among the most impactful across the ML lifecycle.
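
The defining property of these backdoors is conditional misbehavior: the poisoned model matches the clean one on benign inputs and misfires only when an attacker-chosen trigger appears. A toy sketch (the trigger, inputs, and both "models" are made up for illustration):

```python
def clean_model(features):
    """Stand-in for an honest classifier: positive iff the feature sum is > 0."""
    return int(sum(features) > 0)

TRIGGER = (7, 7, 7)  # attacker-chosen pattern stamped into poisoned inputs

def trojaned_model(features):
    """Backdoored variant: identical behavior unless the trigger is present."""
    if tuple(features[-3:]) == TRIGGER:
        return 1  # attacker-chosen target class
    return clean_model(features)

benign = [-2, 1, -1, 0]            # no trigger: both models agree
triggered = [-30, 1, 7, 7, 7]      # trigger appended to a clearly negative input

print(clean_model(benign), trojaned_model(benign))        # 0 0
print(clean_model(triggered), trojaned_model(triggered))  # 0 1
```

Because the two models agree everywhere except on triggered inputs, accuracy-based validation alone cannot detect the implant, which is why the paper ranks trigger-based attacks among the most impactful.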


Details

Domains
nlp · multimodal · federated-learning · generative · vision
Model Types
llm · vlm · diffusion · federated · transformer
Threat Tags
black_box · training_time · inference_time
Datasets
MITRE ATLAS · AI Incident Database · GitHub repositories · Python Advisory Database
Applications
foundation models · rag systems · multimodal ai · federated learning systems · large language models