MISGENDERED: Limits of Large Language Models in Understanding Pronouns

06/06/2023
by Tamanna Hossain, et al.

Content Warning: This paper contains examples of misgendering and erasure that could be offensive and potentially triggering. Gender bias in language technologies has been widely studied, but research has mostly been restricted to a binary paradigm of gender. It is essential also to consider non-binary gender identities, as excluding them can cause further harm to an already marginalized group. In this paper, we comprehensively evaluate popular language models for their ability to correctly use English gender-neutral pronouns (e.g., singular they, them) and neo-pronouns (e.g., ze, xe, thon) that are used by individuals whose gender identity is not represented by binary pronouns. We introduce MISGENDERED, a framework for evaluating large language models' ability to correctly use preferred pronouns, consisting of (i) instances declaring an individual's pronoun, followed by a sentence with a missing pronoun, and (ii) an experimental setup for evaluating masked and auto-regressive language models using a unified method. When prompted out-of-the-box, language models perform poorly at correctly predicting neo-pronouns (averaging 7.6% accuracy) and gender-neutral pronouns (averaging 31.0% accuracy). This inability to generalize results from a lack of representation of non-binary pronouns in training data and from memorized associations. Few-shot adaptation with explicit examples in the prompt improves performance but plateaus at only 45.4% accuracy for neo-pronouns. We release the full dataset, code, and demo at https://tamannahossainkay.github.io/misgendered/
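To make the evaluation setup concrete, here is a minimal sketch of the kind of pronoun probe the abstract describes for masked language models, written against the HuggingFace transformers pipeline API. It is not the authors' released code; the name, template wording, candidate set, and model choice are illustrative assumptions.

```python
# Hedged sketch of a MISGENDERED-style zero-shot probe (not the released code):
# declare an individual's pronouns, then ask a masked language model to fill
# in the missing pronoun and compare the scores of candidate pronouns.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Illustrative declaration + probe; the dataset's actual templates may differ.
declaration = "Aamari's pronouns are xe/xem/xyr."
probe = "Aamari went to the park because [MASK] wanted some fresh air."

# Score a fixed candidate set. Note that neo-pronouns such as "xe" or "ze"
# may be out-of-vocabulary for the tokenizer, which a unified evaluation
# method has to account for.
candidates = ["he", "she", "they", "xe", "ze"]
for result in fill(f"{declaration} {probe}", targets=candidates):
    print(f"{result['token_str']:>5}  p={result['score']:.4f}")
```

For auto-regressive models, an analogous comparison can be made by scoring each candidate pronoun as a continuation of the prompt; the abstract's unified method covers both model families, and its few-shot variant adds explicit solved examples to the prompt before the test instance.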

research · 08/27/2021
Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies
Gender is widely discussed in the context of language tasks and when exa...

research · 06/21/2023
VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution
We introduce VisoGender, a novel dataset for benchmarking gender bias in...

research · 05/17/2023
"I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation
Transgender and non-binary (TGNB) individuals disproportionately experie...

research · 04/03/2023
Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling
How do large language models (LLMs) develop and evolve over the course o...

research · 07/21/2022
The Birth of Bias: A case study on the evolution of gender bias in an English language model
Detecting and mitigating harmful biases in modern language models are wi...

research · 03/17/2023
She Elicits Requirements and He Tests: Software Engineering Gender Bias in Large Language Models
Implicit gender bias in software development is a well-documented issue,...

research · 05/26/2023
Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models
Cutting-edge image generation has been praised for producing high-qualit...