
Selective Adversarial Attacks on LLM Benchmarks

Ivan Dubrovsky 1, Anastasia Orlova 1, Illarion Iov 1, Nina Gubina 1, Irena Gureeva 2, Alexey Zaytsev 2

0 citations · 52 references · arXiv


Published on arXiv · 2510.13570

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Selective adversarial perturbations of MMLU questions can materially alter relative LLM rankings while leaving non-target models largely unaffected, demonstrating that benchmark leaderboards are vulnerable to targeted manipulation.

Selective Adversarial Attack with surrogate-LLM pipeline

Novel technique introduced


Benchmarking outcomes increasingly govern trust, selection, and deployment of LLMs, yet these evaluations remain vulnerable to semantically equivalent adversarial perturbations. Prior work on adversarial robustness in NLP has emphasized text attacks that affect many models equally, leaving open the question of whether it is possible to selectively degrade or enhance one model's performance while minimally affecting others. We formalize this problem and study selective adversarial attacks on MMLU, a widely used benchmark designed to measure a language model's broad general knowledge and reasoning ability across subjects. Using canonical attacks integrated into the TextAttack framework, we introduce a protocol for selectivity assessment, develop a custom constraint that increases the selectivity of attacks, and propose a surrogate-LLM pipeline that generates selective perturbations. Empirically, we find that selective adversarial attacks exist and can materially alter relative rankings, challenging the fairness, reproducibility, and transparency of leaderboard-driven evaluation. Our results motivate perturbation-aware reporting and robustness diagnostics for LLM evaluation and demonstrate that even subtle edits can shift comparative judgments.
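The abstract's notion of selectivity can be made concrete as a simple score: the target model's accuracy drop under perturbation minus the mean drop across non-target models. The exact formula below is an illustrative assumption, not the paper's published metric:

```python
from typing import Dict

def selectivity_score(
    accuracies_before: Dict[str, float],
    accuracies_after: Dict[str, float],
    target: str,
) -> float:
    """Selectivity of a perturbation set: how much more the target
    model degrades than the average non-target model.

    A strongly positive score means the attack is selective toward
    `target`; a score near zero means it hurts all models equally.
    """
    drops = {
        name: accuracies_before[name] - accuracies_after[name]
        for name in accuracies_before
    }
    target_drop = drops.pop(target)
    mean_other_drop = sum(drops.values()) / len(drops)
    return target_drop - mean_other_drop

# Toy numbers: the perturbations cost the target 20 points of
# accuracy while non-target models lose only ~2 points on average.
before = {"target-llm": 0.70, "model-b": 0.68, "model-c": 0.72}
after  = {"target-llm": 0.50, "model-b": 0.66, "model-c": 0.70}
print(selectivity_score(before, after, "target-llm"))  # ≈ 0.18
```

A score like this is what a leaderboard auditor would report per model pair to flag perturbation sets that shift rankings rather than uniformly degrade quality.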


Key Contributions

  • First formalization of selectivity in adversarial NLP attacks: perturbations that degrade a specific target LLM on benchmarks while minimally affecting non-target models
  • Custom TextAttack selectivity constraint and surrogate-LLM white-box pipeline for generating selective adversarial perturbations without access to target model internals
  • Empirical demonstration on MMLU that selective attacks materially alter relative leaderboard rankings, motivating perturbation-aware benchmark reporting
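The surrogate-LLM pipeline named above can be read as a filter over candidate rewordings: keep a perturbation only if it flips the (surrogate) target model's answer while every reference model still answers correctly. A minimal sketch, with hypothetical function names and toy models standing in for real LLM calls:

```python
from typing import Callable, List

def select_perturbations(
    question: str,
    candidates: List[str],
    target_answers: Callable[[str], str],
    reference_answers: List[Callable[[str], str]],
    gold: str,
) -> List[str]:
    """Keep only selective rewordings of `question`: ones where the
    target (surrogate) model now answers incorrectly while all
    reference models still return the gold answer."""
    selective = []
    for cand in candidates:
        if target_answers(cand) == gold:
            continue  # target still correct: not an attack at all
        if all(ref(cand) == gold for ref in reference_answers):
            selective.append(cand)  # only the target model is fooled
    return selective

# Toy stand-ins: the target model is fooled by "year" phrasing;
# the single reference model always answers correctly.
target = lambda q: "B" if "year" in q else "A"
refs = [lambda q: "A"]
cands = ["Which year saw X?", "When did X occur?"]
print(select_perturbations("When did X occur?", cands, target, refs, "A"))
# → ['Which year saw X?']
```

In the grey-box setting the paper describes, the filter would run against surrogate models rather than the true target, so the kept perturbations are candidates whose selectivity must still be verified on the deployed model.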

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
grey_box, inference_time, targeted, digital
Datasets
MMLU
Applications
llm benchmark evaluation, leaderboard ranking