“Fifty Shades of Bias”: Normative Ratings of Gender Bias in GPT Generated English Text

Rishav Hada, Agrima Seth, Harshita Diddee, Kalika Bali


Abstract
Language serves as a powerful tool for the manifestation of societal belief systems and, in doing so, perpetuates the biases prevalent in society. Gender bias is one of the most pervasive of these biases, visible in both online and offline discourse. As LLMs increasingly attain human-like fluency in text generation, a nuanced understanding of the biases these systems can produce is imperative. Prior work often treats gender bias as a binary classification task. However, acknowledging that bias is perceived on a relative scale, we investigate the generation of, and human annotators' receptivity to, bias of varying degrees. Specifically, we create the first dataset of GPT-generated English text with normative ratings of gender bias. Ratings were obtained using Best–Worst Scaling, an efficient comparative annotation framework. We then systematically analyze how themes of gender bias vary across the observed rankings and show that identity attack is most closely related to gender bias. Finally, we evaluate the performance of existing automated models, trained on related concepts, on our dataset.
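The abstract mentions Best–Worst Scaling (BWS), in which annotators see small tuples of items and mark the most and least biased in each. The standard counting-based scoring for BWS assigns each item the fraction of times it was chosen best minus the fraction of times it was chosen worst. The sketch below illustrates that scoring procedure under these general assumptions; it is not necessarily the paper's exact implementation.

```python
from collections import defaultdict

def bws_scores(judgments):
    """Compute Best-Worst Scaling scores with the standard counting method:
    score(item) = (#times best - #times worst) / #appearances, in [-1, 1].

    `judgments` is an iterable of (tuple_items, best_item, worst_item).
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    seen = defaultdict(int)
    for items, b, w in judgments:
        for item in items:
            seen[item] += 1  # count every appearance in a tuple
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}

# Hypothetical annotations over 4-tuples of sentences s1..s4:
example = [
    (("s1", "s2", "s3", "s4"), "s1", "s4"),
    (("s1", "s2", "s3", "s4"), "s1", "s3"),
]
scores = bws_scores(example)
# s1 was best in both tuples -> score 1.0; s2 was never chosen -> 0.0
```

Items that appear in several tuples accumulate counts, so with enough annotations the scores yield a fine-grained real-valued ranking rather than a binary label.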
Anthology ID:
2023.emnlp-main.115
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1862–1876
URL:
https://aclanthology.org/2023.emnlp-main.115
DOI:
10.18653/v1/2023.emnlp-main.115
Cite (ACL):
Rishav Hada, Agrima Seth, Harshita Diddee, and Kalika Bali. 2023. “Fifty Shades of Bias”: Normative Ratings of Gender Bias in GPT Generated English Text. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1862–1876, Singapore. Association for Computational Linguistics.
Cite (Informal):
“Fifty Shades of Bias”: Normative Ratings of Gender Bias in GPT Generated English Text (Hada et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.115.pdf
Video:
https://aclanthology.org/2023.emnlp-main.115.mp4