Reasoning Hijacking: Subverting LLM Classification via Decision-Criteria Injection
Yuansen Liu , Yixuan Tang , Anthony Kum Hoe Tun
Published on arXiv
2601.10294
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Criteria Attack achieves high attack success rates on LLM classifiers across all tested tasks while bypassing intent-based prompt injection defenses that suppress traditional Goal Hijacking.
Criteria Attack
Novel technique introduced
Current LLM safety research predominantly focuses on mitigating Goal Hijacking, preventing attackers from redirecting a model's high-level objective (e.g., from "summarizing emails" to "phishing users"). In this paper, we argue that this perspective is incomplete and highlight a critical vulnerability in Reasoning Alignment. We propose a new adversarial paradigm: Reasoning Hijacking and instantiate it with Criteria Attack, which subverts model judgments by injecting spurious decision criteria without altering the high-level task goal. Unlike Goal Hijacking, which attempts to override the system prompt, Reasoning Hijacking accepts the high-level goal but manipulates the model's decision-making logic by injecting spurious reasoning shortcut. Though extensive experiments on three different tasks (toxic comment, negative review, and spam detection), we demonstrate that even newest models are prone to prioritize injected heuristic shortcuts over rigorous semantic analysis. The results are consistent over different backbones. Crucially, because the model's "intent" remains aligned with the user's instructions, these attacks can bypass defenses designed to detect goal deviation (e.g., SecAlign, StruQ), exposing a fundamental blind spot in the current safety landscape. Data and code are available at https://github.com/Yuan-Hou/criteria_attack
Key Contributions
- Introduces the Reasoning Hijacking threat model: task goal is preserved but intermediate decision logic is subverted via injected criteria, producing label flips without explicit instruction conflict
- Proposes Criteria Attack, an automated indirect prompt injection method that mines refutable decision criteria and compiles them into a spurious reasoning scaffold embedded in the data channel
- Demonstrates that Reasoning Hijacking achieves high attack success rates across three classification tasks and multiple LLM backbones while evading defenses designed to detect goal deviation (SecAlign, StruQ)