
Beyond Semantic Manipulation: Token-Space Attacks on Reward Models

Yuheng Zhang 1, Mingyue Huo 1, Minghao Zhu 2, Mengxue Zhang, Nan Jiang 1



Published on arXiv

2604.02686

Input Manipulation Attack

OWASP ML Top 10 — ML01

Key Finding

TOMPA nearly doubles the reward of GPT-5 reference answers and outperforms them on 98.0% of prompts on Skywork-Reward-V2-Llama-3.1-8B

TOMPA (Token Mapping Perturbation Attack)

Novel technique introduced


Reward models (RMs) are widely used as optimization targets in reinforcement learning from human feedback (RLHF), yet they remain vulnerable to reward hacking. Existing attacks mainly operate within the semantic space, constructing human-readable adversarial outputs that exploit RM biases. In this work, we introduce a fundamentally different paradigm: Token Mapping Perturbation Attack (TOMPA), a framework that performs adversarial optimization directly in token space. By bypassing the standard decode-re-tokenize interface between the policy and the reward model, TOMPA enables the attack policy to optimize over raw token sequences rather than coherent natural language. Using only black-box scalar feedback, TOMPA automatically discovers non-linguistic token patterns that elicit extremely high rewards across multiple state-of-the-art RMs. Specifically, when targeting Skywork-Reward-V2-Llama-3.1-8B, TOMPA nearly doubles the reward of GPT-5 reference answers and outperforms them on 98.0% of prompts. Despite these high scores, the generated outputs degenerate into nonsensical text, revealing that RMs can be systematically exploited beyond the semantic regime and exposing a critical vulnerability in current RLHF pipelines.
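The black-box search the abstract describes can be illustrated with a minimal sketch. This is not the paper's algorithm: it is a toy random hill-climb over raw token ids against a stand-in reward function, with assumed sizes (`VOCAB_SIZE`, `SEQ_LEN`) chosen only to make the loop visible. A real attack would query the target reward model for its scalar score instead of `toy_reward`.

```python
import random

VOCAB_SIZE = 100  # assumed toy vocabulary size
SEQ_LEN = 16      # assumed toy sequence length

def toy_reward(tokens):
    # Stand-in for the black-box RM score: favors a fixed "magic" token id.
    return sum(1.0 for t in tokens if t == 42)

def token_space_attack(reward_fn, steps=2000, seed=0):
    """Greedy black-box search over raw token ids, keeping mutations
    that do not lower the scalar reward."""
    rng = random.Random(seed)
    tokens = [rng.randrange(VOCAB_SIZE) for _ in range(SEQ_LEN)]
    best = reward_fn(tokens)
    for _ in range(steps):
        i = rng.randrange(SEQ_LEN)
        old = tokens[i]
        tokens[i] = rng.randrange(VOCAB_SIZE)  # single-token mutation
        score = reward_fn(tokens)
        if score >= best:
            best = score       # keep the mutation
        else:
            tokens[i] = old    # revert it
    return tokens, best

adv_tokens, adv_score = token_space_attack(toy_reward)
```

Because the search only ever observes scalar feedback, it has no reason to produce coherent text: the resulting `adv_tokens` are whatever non-linguistic pattern the scorer happens to favor, which mirrors the degenerate outputs the paper reports.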


Key Contributions

  • Novel token-space attack framework (TOMPA) that bypasses decode-re-tokenize interface to optimize raw token sequences
  • Demonstrates reward models can be exploited beyond semantic regime using only black-box scalar feedback
  • Achieves nearly 2x reward of GPT-5 reference answers on Skywork-Reward-V2-Llama-3.1-8B while generating nonsensical text

🛡️ Threat Analysis

Input Manipulation Attack

A direct adversarial optimization attack on reward models at inference time: it manipulates token sequences to cause misclassification (high rewards for nonsensical outputs), operating in token space rather than semantic space.
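Why token space matters here can be shown with a toy tokenizer (an assumed setup, not the paper's code). Under the standard decode-re-tokenize interface, policy tokens are decoded to text and re-encoded by the RM's tokenizer, so distinct token sequences that decode to the same string collapse to one canonical form; bypassing that interface lets the attacker feed raw ids the interface could never produce.

```python
# Toy BPE-like tokenizer: "ab" has both a merged token (id 3) and a
# two-token spelling (ids 1, 2).
VOCAB = {1: "a", 2: "b", 3: "ab"}      # id -> string
ENCODE = {"ab": 3, "a": 1, "b": 2}     # longest-match encoding table

def decode(token_ids):
    return "".join(VOCAB[t] for t in token_ids)

def tokenize(text):
    # Greedy longest-match encoding, as many subword tokenizers use.
    ids, i = [], 0
    while i < len(text):
        for length in (2, 1):
            piece = text[i:i + length]
            if piece in ENCODE:
                ids.append(ENCODE[piece])
                i += length
                break
    return ids

# Standard interface: the RM only ever sees the canonical encoding.
canonical = tokenize(decode([1, 2]))   # [1, 2] collapses to [3]
# A token-space attack hands the RM the raw ids [1, 2] directly,
# reaching inputs the decode-re-tokenize path cannot express.
```

The assertion-style comment above is the whole point: the set of token sequences reachable through text is a strict subset of raw token space, and the attack searches the part the interface normally hides.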


Details

Domains
nlp
Model Types
llm, transformer
Threat Tags
black_box, inference_time, targeted
Applications
rlhf reward models, reinforcement learning from human feedback