
Dynamics of Adversarial Attacks on Large Language Model-Based Search Engines

Xiyang Hu

3 citations · 38 references · arXiv


Published on arXiv

2501.00745

Prompt Injection

OWASP LLM Top 10 — LLM01

Key Finding

Cooperation among competing attackers is more likely when players are forward-looking (high discount factors), and defensive measures that reduce attack success rates can counterintuitively incentivize attacks in some equilibrium regimes.

Infinitely Repeated Prisoners' Dilemma (IRPD) attack model

Novel technique introduced


The increasing integration of Large Language Model (LLM)-based search engines has transformed the landscape of information retrieval. However, these systems are vulnerable to adversarial attacks, especially ranking manipulation attacks, where attackers craft webpage content to manipulate the LLM's ranking and promote specific content, gaining an unfair advantage over competitors. In this paper, we study the dynamics of ranking manipulation attacks. We frame this problem as an Infinitely Repeated Prisoners' Dilemma, where multiple players strategically decide whether to cooperate or attack. We analyze the conditions under which cooperation can be sustained, identifying key factors such as attack costs, discount rates, attack success rates, and trigger strategies that influence player behavior. We identify tipping points in the system dynamics, demonstrating that cooperation is more likely to be sustained when players are forward-looking. However, from a defense perspective, we find that simply reducing attack success probabilities can, paradoxically, incentivize attacks under certain conditions. Furthermore, defensive measures to cap the upper bound of attack success rates may prove futile in some scenarios. These insights highlight the complexity of securing LLM-based systems. Our work provides a theoretical foundation and practical insights for understanding and mitigating these systems' vulnerabilities, while emphasizing the importance of adaptive security strategies and thoughtful ecosystem design.


Key Contributions

  • Frames multi-player ranking manipulation attacks in LLM-based search as an Infinitely Repeated Prisoners' Dilemma, identifying conditions under which cooperation (non-attack) is sustained
  • Identifies tipping points in attack dynamics driven by attack costs, discount rates, and attack success rates
  • Reveals the counterintuitive finding that reducing attack success probabilities can paradoxically incentivize more attacks under certain parameter regimes
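The IRPD framing above admits a standard grim-trigger cooperation condition: with temptation payoff T, mutual-cooperation payoff R, and mutual-defection payoff P (T > R > P), cooperation is sustainable iff the discount factor δ ≥ (T − R)/(T − P). A minimal sketch below illustrates this; the payoff construction and the parameter values (`p`, `g`, `c`) are hypothetical illustrations, not taken from the paper.

```python
def min_discount_factor(T, R, P):
    """Grim-trigger threshold: in an infinitely repeated Prisoners'
    Dilemma, mutual cooperation is an equilibrium outcome iff the
    discount factor delta satisfies delta >= (T - R) / (T - P)."""
    return (T - R) / (T - P)

def attack_payoffs(p, g, c, R=1.0, P=0.0):
    """Hypothetical ranking-manipulation payoffs: a one-shot attack
    succeeds with probability p, yields gain g, and costs c, so the
    temptation to deviate is T = R + p*g - c."""
    T = R + p * g - c
    return T, R, P

# Illustrative numbers: 60% success rate, gain 2.0, cost 0.3.
T, R, P = attack_payoffs(p=0.6, g=2.0, c=0.3)
delta_min = min_discount_factor(T, R, P)
# Players sustain cooperation (no attacks) iff their discount
# factor is at least delta_min.
```

In this toy construction, lowering the success rate p shrinks the temptation T and hence the threshold δ_min; the paper's counterintuitive result is that in richer equilibrium regimes the effect can reverse, so reducing success probabilities does not uniformly deter attacks.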

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_box · inference_time · targeted
Applications
llm-based search engines · retrieval-augmented generation