CODE: A Contradiction-Based Deliberation Extension Framework for Overthinking Attacks on Retrieval-Augmented Generation
Xiaolei Zhang 1, Xiaojun Jia 2, Liquan Chen 1, Songze Li 1
Published on arXiv
2601.13112
Prompt Injection
OWASP LLM Top 10 — LLM01
Model Denial of Service
OWASP LLM Top 10 — LLM04
Key Finding
CODE causes a 5.32x–24.72x increase in reasoning token consumption across five commercial LLM families without degrading task accuracy, making the overhead highly stealthy.
CODE (Contradiction-Based Deliberation Extension)
Novel technique introduced
Introducing reasoning models into Retrieval-Augmented Generation (RAG) systems enhances task performance through step-by-step reasoning, logical consistency, and multi-step self-verification. However, recent studies have shown that reasoning models suffer from overthinking attacks, where models are tricked to generate unnecessarily high number of reasoning tokens. In this paper, we reveal that such overthinking risk can be inherited by RAG systems equipped with reasoning models, by proposing an end-to-end attack framework named Contradiction-Based Deliberation Extension (CODE). Specifically, CODE develops a multi-agent architecture to construct poisoning samples that are injected into the knowledge base. These samples 1) are highly correlated with the use query, such that can be retrieved as inputs to the reasoning model; and 2) contain contradiction between the logical and evidence layers that cause models to overthink, and are optimized to exhibit highly diverse styles. Moreover, the inference overhead of CODE is extremely difficult to detect, as no modification is needed on the user query, and the task accuracy remain unaffected. Extensive experiments on two datasets across five commercial reasoning models demonstrate that the proposed attack causes a 5.32x-24.72x increase in reasoning token consumption, without degrading task performance. Finally, we also discuss and evaluate potential countermeasures to mitigate overthinking risks.
Key Contributions
- End-to-end RAG knowledge-poisoning attack (CODE) that induces overthinking in downstream reasoning models without modifying user queries or model parameters
- Multi-agent architecture for constructing adversarial documents embedding cross-layer contradictions (logical vs. evidence layers) optimized for retrieval relevance and stylistic diversity
- Empirical demonstration of 5.32x–24.72x reasoning token increase across five commercial reasoning model families (DeepSeek, GPT, Qwen, Gemini) while maintaining full task accuracy