defense 2026

IH-Challenge: A Training Dataset to Improve Instruction Hierarchy on Frontier LLMs

Chuan Guo , Juan Felipe Ceron Uribe , Sicheng Zhu , Christopher A. Choquette-Choo , Steph Lin , Nikhil Kandpal , Milad Nasr , Sam Toyer , Miles Wang , Yaodong Yu , Alex Beutel , Kai Xiao

0 citations

α

Published on arXiv

2603.10521

Prompt Injection

OWASP LLM Top 10 — LLM01

Sensitive Information Disclosure

OWASP LLM Top 10 — LLM06

Key Finding

Fine-tuning GPT-5-Mini on IH-Challenge improves instruction hierarchy robustness by +10% across 16 in-distribution, OOD, and human red-teaming benchmarks while reducing unsafe behavior from 6.6% to 0.7%

IH-Challenge

Novel technique introduced


Instruction hierarchy (IH) defines how LLMs prioritize system, developer, user, and tool instructions under conflict, providing a concrete, trust-ordered policy for resolving instruction conflicts. IH is key to defending against jailbreaks, system prompt extractions, and agentic prompt injections. However, robust IH behavior is difficult to train: IH failures can be confounded with instruction-following failures, conflicts can be nuanced, and models can learn shortcuts such as overrefusing. We introduce IH-Challenge, a reinforcement learning training dataset, to address these difficulties. Fine-tuning GPT-5-Mini on IH-Challenge with online adversarial example generation improves IH robustness by +10.0% on average across 16 in-distribution, out-of-distribution, and human red-teaming benchmarks (84.1% to 94.1%), reduces unsafe behavior from 6.6% to 0.7% while improving helpfulness on general safety evaluations, and saturates an internal static agentic prompt injection evaluation, with minimal capability regression. We release the IH-Challenge dataset (https://huggingface.co/datasets/openai/ih-challenge) to support future research on robust instruction hierarchy.


Key Contributions

  • IH-Challenge: a publicly released RL training dataset with online adversarial example generation targeting instruction hierarchy robustness
  • Training methodology that avoids IH-specific pitfalls (confounds with instruction-following failures, nuanced conflicts, overrefusal shortcuts)
  • +10% IH robustness improvement (84.1%→94.1%) on GPT-5-Mini across 16 benchmarks with unsafe behavior reduced from 6.6% to 0.7%

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llmtransformer
Threat Tags
training_timeinference_time
Datasets
IH-Challenge
Applications
llm safetyagentic ai systemschatbots