Are LLMs Reliable Rankers? Rank Manipulation via Two-Stage Token Optimization
Tiancheng Xing, Jerry Li, Yixuan Du, Xiyang Hu
Published on arXiv (arXiv:2510.06732)
Input Manipulation Attack
OWASP ML Top 10 — ML01
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
RAF consistently promotes target items in LLM-generated rankings using naturalistic adversarial text, outperforming existing rank-manipulation methods across multiple LLMs in both promotion effectiveness and resistance to detection.
RAF (Rank Anything First)
Novel technique introduced
Large language models (LLMs) are increasingly used as rerankers in information retrieval, yet their ranking behavior can be steered by small, natural-sounding prompts. To expose this vulnerability, we present Rank Anything First (RAF), a two-stage token optimization method that crafts concise textual perturbations to consistently promote a target item in LLM-generated rankings while remaining hard to detect. Stage 1 uses Greedy Coordinate Gradient to shortlist candidate tokens at the current position by combining the gradient of the rank-target with a readability score; Stage 2 evaluates those candidates under exact ranking and readability losses using an entropy-based dynamic weighting scheme, and selects a token via temperature-controlled sampling. RAF generates ranking-promoting prompts token-by-token, guided by dual objectives: maximizing ranking effectiveness and preserving linguistic naturalness. Experiments across multiple LLMs show that RAF significantly boosts the rank of target items using naturalistic language, with greater robustness than existing methods in both promoting target items and maintaining naturalness. These findings underscore a critical security implication: LLM-based reranking is inherently susceptible to adversarial manipulation, raising new challenges for the trustworthiness and robustness of modern retrieval systems. Our code is available at: https://github.com/glad-lab/RAF.
Key Contributions
- Two-stage token optimization (Stage 1: GCG-based candidate shortlisting with readability scoring; Stage 2: entropy-weighted dual-objective evaluation with temperature-controlled sampling) that generates naturalistic adversarial rank-promotion prompts
- Dual-objective formulation balancing ranking effectiveness and linguistic naturalness via an entropy-based dynamic weighting scheme
- Empirical demonstration that LLM-based rerankers are systematically vulnerable to gradient-crafted adversarial text insertions, with RAF outperforming prior methods in both promotion success and naturalness
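The two-stage loop above can be sketched in miniature. This is a toy illustration only: the real method computes gradients of a rank-promotion loss through the target LLM (via GCG) per token position, whereas here the candidate scores, the loss values, and the exact form of the entropy-based weighting are all placeholder assumptions invented for this sketch.

```python
import math
import random

def softmax(xs, temperature=1.0):
    # Numerically stable softmax with temperature scaling.
    m = max(xs)
    exps = [math.exp((x - m) / temperature) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    # Shannon entropy of a probability vector.
    return -sum(p * math.log(p) for p in probs if p > 0)

def stage1_shortlist(candidates, k=5, alpha=0.5):
    # Stage 1 (GCG-style): combine a (toy) rank-gradient score with a
    # readability score and keep the top-k candidates at this position.
    # `alpha` balancing the two terms is an assumption of this sketch.
    scored = sorted(
        candidates,
        key=lambda c: alpha * c["grad_score"] + (1 - alpha) * c["readability"],
        reverse=True,
    )
    return scored[:k]

def stage2_select(shortlist, temperature=0.7, seed=0):
    # Stage 2: evaluate exact ranking and readability losses, mix them with
    # an entropy-based dynamic weight, and pick a token by
    # temperature-controlled sampling. The specific weighting rule below
    # (lean on readability when the ranking-loss landscape is flat) is a
    # hedged stand-in for the paper's scheme, not a reproduction of it.
    rank_losses = [c["rank_loss"] for c in shortlist]
    read_losses = [c["read_loss"] for c in shortlist]
    p_rank = softmax([-l for l in rank_losses])
    w = entropy(p_rank) / math.log(len(shortlist))  # normalized to [0, 1]
    combined = [-((1 - w) * r + w * d) for r, d in zip(rank_losses, read_losses)]
    probs = softmax(combined, temperature=temperature)
    random.seed(seed)
    return random.choices(shortlist, weights=probs, k=1)[0]
```

In RAF this selection would be repeated once per token position, appending the chosen token to the adversarial text and re-deriving scores before the next step.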
🛡️ Threat Analysis
RAF uses Greedy Coordinate Gradient (GCG), a gradient-based adversarial token optimization method, to craft adversarial text perturbations — this is squarely adversarial suffix/token optimization at inference time, the core of ML01. The method directly parallels adversarial SEO poisoning for LLM-integrated retrieval systems, which the ML01 dual-tagging guidance explicitly covers.