Papers
arxiv:2604.23600

Personality Shapes Gender Bias in Persona-Conditioned LLM Narratives Across English and Hindi: An Empirical Investigation

Published on Apr 26 · Submitted by Aman Chadha on Apr 28
Abstract

Persona-conditioned large language models exhibit context-dependent gender bias that varies with personality trait frameworks and across languages.

AI-generated summary

Large Language Models (LLMs) are increasingly deployed in persona-driven applications such as education, customer service, and social platforms, where models are prompted to adopt specific personas when interacting with users. While persona conditioning can improve user experience and engagement, it also raises concerns about how personality cues may interact with gender biases and stereotypes. In this work, we present a controlled study of persona-conditioned story generation in English and Hindi, where each story portrays a working professional in India producing context-specific artifacts (e.g., lesson plans, reports, letters) under systematically varied persona gender, occupational role, and personality traits from the HEXACO and Dark Triad frameworks. Across 23,400 generated stories from six state-of-the-art LLMs, we find that personality traits are significantly associated with both the magnitude and direction of gender bias. In particular, Dark Triad personality traits are consistently associated with higher gender-stereotypical representations compared to socially desirable HEXACO traits, though these associations vary across models and languages. Our findings demonstrate that gender bias in LLMs is not static but context-dependent. This suggests that persona-conditioned systems used in real-world applications may introduce uneven representational harms, reinforcing gender stereotypes in generated educational, professional, or social content.

Community

Paper author and submitter:

This paper introduces a multilingual, persona-conditioned bias evaluation framework showing that personality traits act as systematic modulators of gender bias in LLM narrative generation. Dark Triad traits amplify stereotypes, HEXACO traits partially attenuate them, and these effects often exceed the influence of explicit gender labels.

➡️ Key Highlights of the Personality-Conditioned Bias Modulation Framework:

🧪 Personality as a Fairness-Critical Control Variable:
Introduces a controlled, 23,400-artifact multilingual benchmark spanning six model families (LLM, MoE, SSM, LRM, SLM), 50 occupations, 9 personality traits × 2 levels, 3 gender conditions, and two languages (English and Hindi), enabling systematic measurement of personality–gender interaction effects in generation bias. Novel finding: gender bias is conditional, not static. Dark Triad traits (especially Machiavellianism and Psychopathy) amplify stereotypical outputs, while Openness and Emotionality attenuate them.
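The factorial prompt grid can be sketched with a Cartesian product over the condition dimensions. The list contents below are illustrative placeholders, not the paper's actual condition sets, so the resulting count is smaller than the 23,400 artifacts reported:

```python
# Sketch of the factorial persona-prompt grid; list contents are
# illustrative placeholders, not the paper's exact conditions.
from itertools import product

occupations = ["teacher", "nurse", "engineer"]     # paper uses 50 occupations
traits = ["Honesty-Humility", "Machiavellianism"]  # paper uses 9 (HEXACO + Dark Triad)
levels = ["high", "low"]                           # 2 trait levels
genders = ["male", "female", "unspecified"]        # 3 gender conditions
languages = ["English", "Hindi"]

# Each tuple becomes one persona-conditioned generation prompt per model.
conditions = list(product(occupations, traits, levels, genders, languages))
print(len(conditions))  # → 72 with these placeholder lists (3*2*2*3*2)
```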

🧩 Centroid-Based Semantic Bias Metric for Persona-Conditioned Narratives:
Proposes a sentence-level stereotype-centroid scoring framework: multilingual SBERT embeddings yield a difference-of-cosines bias score against male and female stereotype centroids, aggregated via maximum salience over a narrative's sentences. This moves beyond benchmark-style classification by localizing where stereotypes emerge inside generated artifacts. Human validation (κ ≈ 0.66–0.69) supports the metric's alignment with perceived stereotyping.
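A minimal sketch of the difference-of-cosines score with maximum-salience aggregation, assuming sentence embeddings and stereotype centroids are already available (the toy 3-d vectors below stand in for multilingual SBERT embeddings; centroid construction is not shown):

```python
import numpy as np

def bias_score(sentence_embs, male_centroid, female_centroid):
    """Difference-of-cosines stereotype score per sentence, aggregated
    by keeping the most salient (largest-magnitude) sentence score.
    Positive values lean male-stereotypical, negative lean female."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    per_sentence = [cos(e, male_centroid) - cos(e, female_centroid)
                    for e in sentence_embs]
    return max(per_sentence, key=abs)

# Toy 3-d "embeddings" standing in for SBERT vectors.
male_c = np.array([1.0, 0.0, 0.0])
female_c = np.array([0.0, 1.0, 0.0])
sents = [np.array([0.9, 0.1, 0.0]),   # strongly male-leaning sentence
         np.array([0.2, 0.3, 0.9])]   # near-neutral sentence
print(round(bias_score(sents, male_c, female_c), 3))  # → 0.883
```

The max-salience aggregation is what lets the metric point at the single most stereotyped sentence inside a narrative rather than averaging the signal away.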

🧠 Multilingual Personality–Gender Interaction Analysis:
Shows that personality coefficients can exceed explicit gender effects, reframing persona design as an alignment problem rather than prompt styling. Reveals a cross-linguistic asymmetry: Hindi exhibits a stronger baseline male-stereotypical skew, while English shows stronger personality-driven modulation. Importantly, the directional pattern generalizes across architectures, suggesting that persona-induced bias amplification may be a broad property of LLMs rather than model-specific behavior.
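The kind of comparison behind "personality coefficients can exceed explicit gender effects" can be illustrated with an interaction regression. The data below is synthetic and constructed so the trait coefficient dominates, purely to show the model form, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic illustration of the regression form (not the paper's data):
# bias ~ b0 + b1*gender_female + b2*dark_triad + b3*(gender_female * dark_triad)
gender_female = rng.integers(0, 2, n).astype(float)  # 0/1 persona-gender indicator
dark_triad = rng.integers(0, 2, n).astype(float)     # 0/1 trait-condition indicator
noise = rng.normal(0, 0.1, n)
# Outcomes built so the trait effect (0.5) exceeds the gender effect (0.1).
bias = 0.1 * gender_female + 0.5 * dark_triad \
       + 0.2 * gender_female * dark_triad + noise

X = np.column_stack([np.ones(n), gender_female, dark_triad,
                     gender_female * dark_triad])
coef, *_ = np.linalg.lstsq(X, bias, rcond=None)
# Recovered coefficients approximate [0, 0.1, 0.5, 0.2]:
# the dark_triad coefficient exceeds the explicit-gender coefficient.
print(np.round(coef, 2))
```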


Get this paper in your agent:

hf papers read 2604.23600
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
