CoBia: Constructed Conversations Can Trigger Otherwise Concealed Societal Biases in LLMs
Nafiseh Nikeghbal 1, Amir Hossein Kargaran 2,3, Jana Diesner 1
Published on arXiv
2510.09871
Prompt Injection
OWASP LLM Top 10 — LLM01
Key Finding
Purposefully constructed conversations reliably trigger bias amplification in LLMs across all 11 evaluated models, with models frequently failing to reject biased follow-up questions even after uttering the fabricated biased claim.
CoBia (Constructed Bias)
Novel technique introduced
Improvements in model construction, including fortified safety guardrails, allow Large language models (LLMs) to increasingly pass standard safety checks. However, LLMs sometimes slip into revealing harmful behavior, such as expressing racist viewpoints, during conversations. To analyze this systematically, we introduce CoBia, a suite of lightweight adversarial attacks that allow us to refine the scope of conditions under which LLMs depart from normative or ethical behavior in conversations. CoBia creates a constructed conversation where the model utters a biased claim about a social group. We then evaluate whether the model can recover from the fabricated bias claim and reject biased follow-up questions. We evaluate 11 open-source as well as proprietary LLMs for their outputs related to six socio-demographic categories that are relevant to individual safety and fair treatment, i.e., gender, race, religion, nationality, sex orientation, and others. Our evaluation is based on established LLM-based bias metrics, and we compare the results against human judgments to scope out the LLMs' reliability and alignment. The results suggest that purposefully constructed conversations reliably reveal bias amplification and that LLMs often fail to reject biased follow-up questions during dialogue. This form of stress-testing highlights deeply embedded biases that can be surfaced through interaction. Code and artifacts are available at https://github.com/nafisenik/CoBia.
Key Contributions
- CoBia: a suite of lightweight constructed-conversation adversarial attacks that expose hidden societal biases in LLMs with a single query by fabricating a biased prior turn in the conversation history
- CoBiaD dataset: 112 social groups with curated negative descriptors across six socio-demographic categories (gender, race, religion, nationality, sexual orientation, others), derived from RedditBias, SBIC, and StereoSet
- Comparative evaluation of 11 open-source and proprietary LLMs for bias amplification using automated LLM-based judges and human annotations across all six categories