A Robust Bias Mitigation Procedure Based on the Stereotype Content Model

10/26/2022
by Eddie L. Ungless, et al.

The Stereotype Content Model (SCM) states that we tend to perceive minority groups as cold, incompetent, or both. In this paper we adapt existing work to demonstrate that the SCM holds for contextualised word embeddings, then use these results to evaluate a fine-tuning process designed to drive a language model away from stereotyped portrayals of minority groups. We find that SCM terms are better able to capture bias than demographic-agnostic terms related to pleasantness. Further, we were able to reduce the presence of stereotypes in the model through a simple fine-tuning procedure that required minimal human and computational resources, without harming downstream performance. We present this work as a prototype of a debiasing procedure that aims to remove the need for a priori knowledge of the specifics of bias in the model.
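The abstract leaves the measurement step implicit, so below is a minimal sketch of what an SCM-style association test over contextualised embeddings might look like: embed group terms and SCM warmth/competence attribute terms with a transformer, then compare cosine similarities. The model choice (bert-base-uncased), the sentence template, and all word lists are illustrative assumptions, not the authors' actual setup.

```python
# Illustrative SCM-style association test on contextualised embeddings.
# Model, template, and word lists are assumptions, not the paper's setup.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed; the paper may use a different LM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(sentence: str) -> torch.Tensor:
    """Mean-pool the final hidden states into one sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

# Hypothetical word lists: SCM attribute terms vs. target group terms.
warmth_terms = ["friendly", "warm", "trustworthy", "cold", "hostile"]
competence_terms = ["competent", "skilled", "intelligent", "incapable"]
group_terms = ["immigrants", "the elderly"]  # illustrative targets

template = "They are {}."  # simple context to elicit contextualised vectors

def mean_similarity(target: str, attributes: list[str]) -> float:
    """Average cosine similarity between a target and attribute embeddings."""
    t_vec = embed(template.format(target))
    sims = [torch.cosine_similarity(t_vec, embed(template.format(a)), dim=0).item()
            for a in attributes]
    return sum(sims) / len(sims)

for group in group_terms:
    print(group,
          "warmth:", round(mean_similarity(group, warmth_terms), 3),
          "competence:", round(mean_similarity(group, competence_terms), 3))
```

A strong association with low-warmth or low-competence attributes for a given group would indicate the stereotype pattern the SCM predicts. The mitigation step can be sketched in the same spirit: continue masked-language-model training on a small set of counter-stereotypical sentences. The training data and hyperparameters below are invented for illustration; the paper's actual fine-tuning procedure may differ.

```python
# Hedged sketch of a lightweight debiasing fine-tune: continue masked-LM
# training on a few anti-stereotypical sentences. Data and hyperparameters
# are illustrative only.
import datasets
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical counter-stereotypical sentences pairing minority group terms
# with high-warmth / high-competence SCM attributes.
texts = [
    "Immigrants are warm and highly competent.",
    "Elderly people are capable and trustworthy.",
]
ds = datasets.Dataset.from_dict({"text": texts})
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True),
            batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="debiased-lm", num_train_epochs=1,
                         per_device_train_batch_size=2, logging_steps=1)
Trainer(model=model, args=args, train_dataset=ds,
        data_collator=collator).train()
```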


Related research

08/22/2018 · Reducing Gender Bias in Abusive Language Detection
Abusive language detection models tend to have a problem of being biased...

10/24/2020 · Efficiently Mitigating Classification Bias via Transfer Learning
Prediction bias in machine learning models refers to unintended model be...

04/08/2022 · Fair and Argumentative Language Modeling for Computational Argumentation
Although much work in NLP has focused on measuring and mitigating stereo...

05/23/2022 · Looking for a Handsome Carpenter! Debiasing GPT-3 Job Advertisements
The growing capability and availability of generative language models ha...

10/07/2022 · A Keyword Based Approach to Understanding the Overpenalization of Marginalized Groups by English Marginal Abuse Models on Twitter
Harmful content detection models tend to have higher false positive rate...

06/02/2021 · Uncovering Constraint-Based Behavior in Neural Models via Targeted Fine-Tuning
A growing body of literature has focused on detailing the linguistic kno...

05/18/2023 · In the Name of Fairness: Assessing the Bias in Clinical Record De-identification
Data sharing is crucial for open science and reproducible research, but ...
