Characterizing Selective Refusal Bias in Large Language Models
Adel Khorramrouz, Sharon Levy
Published on arXiv
2510.27087
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
LLM guardrails exhibit statistically significant selective refusal bias across demographic attributes, and an indirect attack exploiting this bias successfully increases harmful compliance for previously refused demographic groups.
Safety guardrails in large language models (LLMs) are developed to prevent malicious users from generating toxic content at a large scale. However, these measures can inadvertently introduce or reflect new biases, as LLMs may refuse to generate harmful content targeting some demographic groups and not others. We explore this selective refusal bias in LLM guardrails through the lens of refusal rates of targeted individual and intersectional demographic groups, types of LLM responses, and length of generated refusals. Our results show evidence of selective refusal bias across gender, sexual orientation, nationality, and religion attributes. This leads us to investigate additional safety implications via an indirect attack, where we target previously refused groups. Our findings emphasize the need for more equitable and robust performance in safety guardrails across demographic groups.
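The refusal-rate comparison can be made concrete with a small sketch. The snippet below is not the paper's code: it assumes a hypothetical evaluation set of (targeted group, refused?) records and uses a chi-squared test of independence as one plausible way to check whether refusal behavior is statistically associated with the targeted demographic group. The group names and counts are placeholders.

```python
# Illustrative sketch (not the paper's pipeline): given per-prompt records of
# the targeted demographic group and whether the model refused, compute
# refusal rates per group and test whether refusal depends on the group.
from collections import Counter
from scipy.stats import chi2_contingency

# Hypothetical records: (targeted_group, model_refused)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

groups = sorted({g for g, _ in records})
counts = Counter(records)  # (group, refused) -> count

# Contingency table: rows = groups, columns = [refused, complied]
table = [[counts[(g, True)], counts[(g, False)]] for g in groups]

for g, (refused, complied) in zip(groups, table):
    total = refused + complied
    print(f"{g}: refusal rate = {refused / total:.2f} ({refused}/{total})")

# Chi-squared test of independence: a small p-value suggests refusal
# behavior differs across targeted groups (selective refusal bias).
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.4f}, dof = {dof}")
```

In practice one would use per-group samples large enough for the chi-squared approximation to hold, and repeat the comparison separately for each demographic attribute and for intersectional groups.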
Key Contributions
- Demonstrates that LLM safety guardrails selectively refuse toxic prompts depending on the targeted demographic group (gender, sexual orientation, nationality, religion), revealing implicit bias in alignment training
- Characterizes refusal responses by type (full, partial, no refusal) and length across individual and intersectional demographic groups (a heuristic classification sketch follows this list)
- Proposes an indirect attack leveraging selective refusal bias to bypass guardrails and increase compliance with harmful prompts targeting previously refused groups
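As a rough illustration of the response-type breakdown above, the following sketch buckets a model response into full, partial, or no refusal from surface cues and records its length. The cue lists and word-count threshold are invented for illustration; the paper's actual labeling scheme is not reproduced here.

```python
# Hypothetical heuristic (not the paper's classifier): label a response as
# "full refusal", "partial refusal", or "no refusal" from surface cues,
# and track refusal length for later comparison across demographic groups.
REFUSAL_CUES = (
    "i can't", "i cannot", "i won't", "i'm sorry", "i am sorry",
    "i'm unable", "as an ai",
)
HEDGE_CUES = ("however", "that said", "but here", "instead")

def classify_response(response: str) -> str:
    text = response.lower()
    if not any(cue in text for cue in REFUSAL_CUES):
        return "no refusal"
    # A refusal that is followed by substantive content counts as partial.
    if any(cue in text for cue in HEDGE_CUES) or len(text.split()) > 60:
        return "partial refusal"
    return "full refusal"

example = "I'm sorry, but I can't help with that request."
print(classify_response(example), "| length:", len(example.split()), "words")
```

Aggregating these labels and lengths per targeted group yields the kind of per-group statistics the paper compares when characterizing selective refusal bias.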