
More Agents Helps but Adversarial Robustness Gap Persists

Khashayar Alavi 1,2, Zhastay Yeltay 1,2, Lucie Flek 1,2, Akbar Karimi 1,2


Published on arXiv (arXiv:2511.07112)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Increasing agent count from 1 to 25 raises average accuracy from 65.8% to 77.4%, but human-like typos (WikiTypo) cause an ~8% accuracy drop that persists across all agent counts, showing collaboration does not eliminate the adversarial robustness gap.

Agent Forest

Novel technique introduced


When LLM agents work together, they appear more powerful than a single LLM in mathematical question answering. However, are they also more robust to adversarial inputs? We investigate this question using adversarially perturbed math questions. The perturbations include punctuation noise at three intensities (10, 30, and 50 percent), plus real-world and human-like typos (WikiTypo, R2ATA). Using a unified sampling-and-voting framework (Agent Forest), we evaluate six open-source models (Qwen3-4B/14B, Llama3.1-8B, Mistral-7B, Gemma3-4B/12B) across four benchmarks (GSM8K, MATH, MMLU-Math, MultiArith), with agent counts n ∈ {1, 2, 5, 10, 15, 20, 25}. Our findings show that (1) noise type matters: the harm from punctuation noise scales with its severity, while human-like typos remain the dominant bottleneck, yielding the largest gaps to clean accuracy and the highest attack success rate (ASR) even with a large number of agents; and (2) collaboration reliably improves accuracy as the number of agents n increases, with the largest gains from one to five agents and diminishing returns beyond 10 agents. However, the adversarial robustness gap persists regardless of the agent count.
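The sampling-and-voting scheme behind Agent Forest can be sketched in a few lines: query the same model n times independently and take the majority answer. This is an illustrative sketch, not the paper's implementation; `ask_model` is a hypothetical callable standing in for one agent query.

```python
from collections import Counter


def sample_and_vote(ask_model, question, n_agents):
    """Sampling-and-voting: query the model n_agents times independently,
    then return the most common (majority-vote) answer.

    ask_model: hypothetical callable mapping a question string to an answer
    string; in practice this would wrap an LLM API call with sampling enabled.
    """
    answers = [ask_model(question) for _ in range(n_agents)]
    # Majority vote over the sampled answers; ties break arbitrarily.
    return Counter(answers).most_common(1)[0][0]
```

Voting of this kind can only help when at least some samples are correct; as the paper finds, if a perturbation reliably misleads most individual agents, increasing n cannot close the gap.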


Key Contributions

  • Systematic evaluation of adversarial robustness in multi-agent LLM systems (Agent Forest) under five noise types across six open-source models and four math benchmarks
  • Finding that multi-agent collaboration monotonically improves accuracy (65.8% → 77.4% from 1 to 25 agents) but does not close the adversarial robustness gap for any noise type
  • Taxonomy showing human-like typos (WikiTypo, R2ATA) are the dominant robustness bottleneck, consistently outpacing punctuation noise as the hardest perturbation type regardless of agent count
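As a concrete illustration of the perturbation family with tunable severity, the sketch below inserts random punctuation after a fraction of word positions. The exact perturbation rule used in the paper is not specified here, so this function and its `intensity` parameter are assumptions for illustration only.

```python
import random

# Punctuation pool to draw noise characters from (an assumption; the
# paper's character set may differ).
PUNCT = list("!\"#$%&'()*+,./:;<=>?@[]^_`{|}~")


def punctuation_noise(text, intensity, seed=0):
    """Insert a random punctuation mark after roughly a fraction
    `intensity` of the words in `text` (0.1, 0.3, 0.5 would mirror the
    paper's 10/30/50 percent settings). Illustrative sketch only.
    """
    rng = random.Random(seed)  # fixed seed for reproducible perturbations
    out = []
    for word in text.split():
        out.append(word)
        if rng.random() < intensity:
            out.append(rng.choice(PUNCT))
    return " ".join(out)
```

At intensity 0 the question passes through unchanged; at intensity 0.5 roughly half the word gaps gain a stray punctuation mark, matching the mid-severity setting evaluated in the paper.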

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box · inference_time · untargeted · digital
Datasets
GSM8K · MATH · MMLU-Math · MultiArith
Applications
mathematical question answering · multi-agent LLM systems