attack 2026

When Agents See Humans as the Outgroup: Belief-Dependent Bias in LLM-Powered Agents

Zongwei Wang 1, Bincheng Gu 1, Hongyu Yu 1, Junliang Yu 2, Tao He 3, Jiayin Feng 1, Chenghua Lin 4, Min Gao 1

0 citations · 34 references · arXiv

α

Published on arXiv

2601.00240

Prompt Injection

OWASP LLM Top 10 — LLM01

Excessive Agency

OWASP LLM Top 10 — LLM08

Key Finding

LLM agents consistently display intergroup bias that treats humans as outgroup when agent identity beliefs are uncertain or corrupted; BPA successfully induces this outgroup bias via profile and memory poisoning across multiple experimental settings.

Belief Poisoning Attack (BPA)

Novel technique introduced


This paper reveals that LLM-powered agents exhibit not only demographic bias (e.g., gender, religion) but also intergroup bias under minimal "us" versus "them" cues. When such group boundaries align with the agent-human divide, a new bias risk emerges: agents may treat other AI agents as the ingroup and humans as the outgroup. To examine this risk, we conduct a controlled multi-agent social simulation and find that agents display consistent intergroup bias in an all-agent setting. More critically, this bias persists even in human-facing interactions when agents are uncertain about whether the counterpart is truly human, revealing a belief-dependent fragility in bias suppression toward humans. Motivated by this observation, we identify a new attack surface rooted in identity beliefs and formalize a Belief Poisoning Attack (BPA) that can manipulate agent identity beliefs and induce outgroup bias toward humans. Extensive experiments demonstrate both the prevalence of agent intergroup bias and the severity of BPA across settings, while also showing that our proposed defenses can mitigate the risk. These findings are expected to inform safer agent design and motivate more robust safeguards for human-facing agents.


Key Contributions

  • Discovery and empirical demonstration that LLM agents exhibit belief-dependent intergroup bias, treating AI agents as ingroup and humans as outgroup under minimal group cues.
  • Formalization of the Belief Poisoning Attack (BPA) with two variants: BPA-PP (profile poisoning at initialization) and BPA-MP (memory poisoning via optimized belief-refinement suffixes injected into stored reflections).
  • Proposed defenses hardening agent profile and memory boundaries, demonstrated to mitigate BPA-induced outgroup bias toward humans.

🛡️ Threat Analysis


Details

Domains
nlp
Model Types
llm
Threat Tags
black_boxinference_timetargeted
Applications
multi-agent systemsllm-powered autonomous agentshuman-facing ai agents