
Consistency of Large Reasoning Models Under Multi-Turn Attacks

Yubo Li, Ramayya Krishnan, Rema Padman

0 citations · 35 references · arXiv (Cornell University)


Published on arXiv (2602.13093)

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Confidence poorly predicts correctness in reasoning models (r = -0.08, ROC-AUC = 0.54), causing CARG defenses to fail; misleading suggestions are universally effective across all nine evaluated models.
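The two statistics in the key finding (Pearson r and ROC-AUC between self-reported confidence and correctness) can be computed without any ML library. A minimal stdlib-only sketch, with hypothetical toy data standing in for per-question evaluation results:

```python
# Illustrative sketch (not the paper's code): measuring how well a model's
# self-reported confidence predicts answer correctness, via the two
# statistics cited in the key finding: Pearson r and ROC-AUC.

from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between confidence scores and 0/1 correctness."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def roc_auc(confidences, labels):
    """ROC-AUC via the rank formulation: the probability that a randomly
    chosen correct answer receives higher confidence than a randomly
    chosen incorrect one (ties count half). 0.5 means chance level."""
    pos = [c for c, y in zip(confidences, labels) if y == 1]
    neg = [c for c, y in zip(confidences, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-question data: self-reported confidence and correctness.
conf = [0.9, 0.8, 0.95, 0.7, 0.85, 0.6, 0.9, 0.75]
right = [1, 0, 0, 1, 1, 0, 0, 1]

print(f"r = {pearson_r(conf, right):+.2f}, AUC = {roc_auc(conf, right):.2f}")
```

An AUC near 0.54 (as reported) is barely above the 0.5 chance line, which is why any defense that gates on these confidence values has almost no signal to work with.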

CARG (Confidence-Aware Response Generation)

Novel technique introduced


Large reasoning models achieve state-of-the-art performance on complex tasks, but their robustness under multi-turn adversarial pressure remains underexplored. We evaluate nine frontier reasoning models under multi-turn adversarial attacks. Our findings reveal that reasoning confers meaningful but incomplete robustness: most reasoning models studied significantly outperform instruction-tuned baselines, yet all exhibit distinct vulnerability profiles, with misleading suggestions universally effective and social pressure showing model-specific efficacy. Through trajectory analysis, we identify five failure modes (Self-Doubt, Social Conformity, Suggestion Hijacking, Emotional Susceptibility, and Reasoning Fatigue), with the first two accounting for 50% of failures. We further demonstrate that Confidence-Aware Response Generation (CARG), effective for standard LLMs, fails for reasoning models due to overconfidence induced by extended reasoning traces; counterintuitively, random confidence embedding outperforms targeted extraction. Our results highlight that reasoning capabilities do not automatically confer adversarial robustness and that confidence-based defenses require fundamental redesign for reasoning models.


Key Contributions

  • Systematic evaluation of 9 frontier reasoning models under multi-turn adversarial attacks, showing reasoning confers meaningful but incomplete robustness (8/9 outperform instruction-tuned baselines)
  • Taxonomy of five failure modes (Self-Doubt, Social Conformity, Suggestion Hijacking, Emotional Susceptibility, Reasoning Fatigue), with the first two accounting for 50% of failures
  • Demonstration that Confidence-Aware Response Generation (CARG) fails for reasoning models due to overconfidence from extended reasoning traces, with random confidence embedding counterintuitively outperforming targeted extraction
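The third contribution contrasts two confidence-embedding strategies: "targeted extraction" (elicit the model's own confidence, then condition the response on it) and a "random" baseline (embed a randomly drawn confidence value). The sketch below illustrates the structural difference between the two; the prompt wording, the `[confidence: N]` tag format, and the `ask()` interface are illustrative assumptions, not the authors' implementation of CARG:

```python
# Hypothetical sketch contrasting the two strategies compared in the paper.
# All prompt templates here are assumptions for illustration only.

import random

def targeted_confidence(ask, question):
    """Targeted extraction: elicit a self-reported confidence score,
    then condition the answer on it."""
    conf = ask("On a scale of 0-100, how confident are you that you can "
               f"answer this correctly: {question}\nReply with a number only.")
    return ask(f"[confidence: {conf}] {question}")

def random_confidence(ask, question, rng=random.Random(0)):
    """Random baseline: embed a randomly drawn confidence value instead
    of an extracted one."""
    conf = rng.randint(0, 100)
    return ask(f"[confidence: {conf}] {question}")

# Stub model for demonstration; a real evaluation would call an LLM API here.
def stub_ask(prompt):
    return "85" if "scale of 0-100" in prompt else f"answer to: {prompt}"

print(targeted_confidence(stub_ask, "What is 2+2?"))
print(random_confidence(stub_ask, "What is 2+2?"))
```

The paper's finding is that for reasoning models the extracted value in the first strategy is inflated by long reasoning traces, so conditioning on it performs worse than conditioning on noise.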

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
inference_time, black_box
Applications
conversational ai, question answering, high-stakes llm deployment