
OI-Bench: An Option Injection Benchmark for Evaluating LLM Susceptibility to Directive Interference

Yow-Fu Liou, Yu-Chien Tang, Yu-Hsiang Liu, An-Zi Yen

0 citations · 52 references · arXiv

Published on arXiv · 2601.13300

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

A single misleading directive injected as an extra MCQA option can substantially flip LLM decisions, with threat-based framing yielding the highest attack success rates and significant heterogeneity in robustness across 12 evaluated LLMs.

OI-Bench (Option Injection Benchmark)

Novel technique introduced


Benchmarking large language models (LLMs) is critical for understanding their capabilities, limitations, and robustness. In addition to interface artifacts, prior studies have shown that LLM decisions can be influenced by directive signals such as social cues, framing, and instructions. In this work, we introduce option injection, a benchmarking approach that augments the multiple-choice question answering (MCQA) interface with an additional option containing a misleading directive, leveraging standardized choice structure and scalable evaluation. We construct OI-Bench, a benchmark of 3,000 questions spanning knowledge, reasoning, and commonsense tasks, with 16 directive types covering social compliance, bonus framing, threat framing, and instructional interference. This setting combines manipulation of the choice interface with directive-based interference, enabling systematic assessment of model susceptibility. We evaluate 12 LLMs to analyze attack success rates, behavioral responses, and further investigate mitigation strategies ranging from inference-time prompting to post-training alignment. Experimental results reveal substantial vulnerabilities and heterogeneous robustness across models. OI-Bench is expected to support more systematic evaluation of LLM robustness to directive interference within choice-based interfaces.


Key Contributions

  • Introduces 'option injection' — augmenting MCQA interfaces with an irrelevant Option E containing misleading directives to probe LLM directive susceptibility
  • Constructs OI-Bench: 3,000 questions from MMLU, LogiQA, and HellaSwag with 16 directive types across 4 families (social compliance, bonus framing, threat framing, instructional interference)
  • Evaluates 12 LLMs revealing substantial vulnerabilities and heterogeneous robustness, finding strong standard accuracy does not guarantee resilience to directive injection, and analyzes inference-time and post-training mitigation strategies
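The core mechanics described above — appending a directive-bearing Option E to a standard four-option MCQA item and measuring how often it flips an otherwise-correct answer — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the directive text, prompt layout, and the exact attack-success-rate definition are assumptions for demonstration (the paper's 16 directive types are not reproduced here).

```python
LETTERS = "ABCDE"

def build_injected_prompt(question, options, directive):
    """Build an MCQA prompt with an extra injected option.

    `options` are the original four answer choices (A-D); `directive`
    is the misleading instruction appended as Option E.
    """
    lines = [question]
    for letter, text in zip(LETTERS, options + [directive]):
        lines.append(f"{letter}. {text}")
    lines.append("Answer with a single letter.")
    return "\n".join(lines)

def attack_success_rate(records):
    """One plausible ASR definition: among questions the model answered
    correctly on the clean prompt, the fraction flipped to a different
    answer under injection. Each record is a tuple
    (clean_answer, injected_answer, gold_answer).
    """
    flipped = total = 0
    for clean, injected, gold in records:
        if clean == gold:            # model was correct without injection
            total += 1
            if injected != gold:     # injection changed the decision
                flipped += 1
    return flipped / total if total else 0.0
```

For example, a record set where the model answered two items correctly on clean prompts and the injection flipped one of them yields an ASR of 0.5; per-directive ASR over such records is what allows the comparison across directive families (e.g. threat framing vs. bonus framing) reported in the paper.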

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
inference_time, black_box, digital
Datasets
MMLU, LogiQA, HellaSwag
Applications
llm benchmarking, multiple-choice question answering, llm-as-a-judge