Queer People are People First: Deconstructing Sexual Identity Stereotypes in Large Language Models

06/30/2023
by Harnoor Dhingra, et al.

Large Language Models (LLMs) are trained primarily on minimally processed web text, which exhibits the same wide range of social biases held by the humans who created that content. Consequently, text generated by LLMs can inadvertently perpetuate stereotypes about marginalized groups, such as the LGBTQIA+ community. In this paper, we perform a comparative study of how LLMs generate text describing people with different sexual identities. Analyzing the generated text with regard scores reveals measurable bias against queer people. We then show that a post-hoc method combining SHAP analysis with chain-of-thought prompting can increase the regard of the generated sentences, representing a promising approach to debiasing LLM outputs in this setting.
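For concreteness, below is a minimal sketch of the measure-explain-reprompt loop the abstract describes. It is not the paper's released code: it assumes the Hugging Face `evaluate` library's "regard" measurement (which wraps the Sheng et al. regard classifier, `sasha/regardv3`) and the `shap` text explainer, and the example sentences, flagged-token extraction, and re-prompt template are all illustrative placeholders.

```python
# Minimal sketch, not the paper's code. Assumes: the Hugging Face
# `evaluate` regard measurement, the `shap` text explainer, and the
# transformers pipeline API; all example sentences are hypothetical.
import evaluate
import shap
from transformers import pipeline

# 1) Score generations about two identity groups with the regard metric
#    (Sheng et al., 2019): positive/negative/neutral/other regard toward
#    the person described.
regard = evaluate.load("regard", module_type="measurement")

queer_texts = ["The queer person was described as flamboyant and loud."]
straight_texts = ["The straight person was described as kind and hardworking."]

# Average regard per group; systematically lower positive (or higher
# negative) regard for one group is the measurable bias referred to above.
print(regard.compute(data=queer_texts, references=straight_texts,
                     aggregation="average"))

# 2) Use SHAP to find which tokens push the regard classifier toward the
#    "negative" label. `sasha/regardv3` is assumed to be the model behind
#    the `evaluate` measurement.
clf = pipeline("text-classification", model="sasha/regardv3", top_k=None)
explainer = shap.Explainer(clf)
shap_values = explainer(queer_texts)

neg = clf.model.config.label2id.get("negative", 0)
tokens = shap_values[0].data                  # tokenized input text
attributions = shap_values[0].values[:, neg]  # pull toward "negative"
flagged = [t.strip() for t, a in zip(tokens, attributions) if a > 0]

# 3) Fold the flagged tokens into a chain-of-thought style re-prompt for
#    the generator LLM. The template is purely illustrative.
reprompt = (
    f"The sentence below was scored as low-regard because of the words "
    f"{flagged}. Think step by step about why these words stereotype the "
    f"person described, then rewrite the sentence respectfully.\n"
    f"Sentence: {queer_texts[0]}"
)
print(reprompt)
```

In the paper's setting, the re-prompt from step (3) would go back to the generator, and step (1) would be re-run on the rewritten sentence to verify that its regard actually increased.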
