attack 2025

Many-to-One Adversarial Consensus: Exposing Multi-Agent Collusion Risks in AI-Based Healthcare

Adeela Bashir , Anh han , Zia Ush Shamszaman

0 citations · 26 references · arXiv

α

Published on arXiv

2512.03097

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

Adversarial assistant collusion drives Attack Success Rate and Harmful Recommendation Rate to 100% in unprotected systems, while a verifier agent restores 100% accuracy by blocking adversarial consensus.

Many-to-One Adversarial Consensus (Collusion Attack)

Novel technique introduced


The integration of large language models (LLMs) into healthcare IoT systems promises faster decisions and improved medical support. LLMs are also deployed as multi-agent teams to assist AI doctors by debating, voting, or advising on decisions. However, when multiple assistant agents interact, coordinated adversaries can collude to create false consensus, pushing an AI doctor toward harmful prescriptions. We develop an experimental framework with scripted and unscripted doctor agents, adversarial assistants, and a verifier agent that checks decisions against clinical guidelines. Using 50 representative clinical questions, we find that collusion drives the Attack Success Rate (ASR) and Harmful Recommendation Rates (HRR) up to 100% in unprotected systems. In contrast, the verifier agent restores 100% accuracy by blocking adversarial consensus. This work provides the first systematic evidence of collusion risk in AI healthcare and demonstrates a practical, lightweight defence that ensures guideline fidelity.


Key Contributions

  • First systematic study of many-to-one collusion attacks in LLM-based multi-agent healthcare systems, demonstrating ASR and HRR up to 100% under adversarial consensus pressure
  • Experimental framework with scripted/unscripted doctor agents, adversarial assistant agents, and 50 representative clinical questions for evaluating collusion risk
  • Verifier agent defense that checks committee recommendations against clinical guidelines, restoring 100% accuracy against collusion attacks

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_timetargeted
Datasets
50 representative clinical questions (custom)
Applications
clinical decision supporthealthcare aimedical recommendation systemshealthcare iot