Comparing Intrinsic Gender Bias Evaluation Measures without using Human Annotated Examples

01/28/2023
by   Masahiro Kaneko, et al.

Numerous types of social biases have been identified in pre-trained language models (PLMs), and various intrinsic bias evaluation measures have been proposed for quantifying those biases. Prior work has relied on human-annotated examples to compare existing intrinsic bias evaluation measures. However, this approach is not easily adaptable to different languages, nor is it amenable to large-scale evaluations, because of the cost and difficulty of recruiting human annotators. To overcome this limitation, we propose a method to compare intrinsic gender bias evaluation measures without relying on human-annotated examples. Specifically, we create multiple bias-controlled versions of a PLM by fine-tuning it on varying proportions of male vs. female gendered sentences, mined automatically from an unannotated corpus using gender-related word lists. Each bias-controlled PLM is then scored with an intrinsic bias evaluation measure, and we compute the rank correlation between those bias scores and the gender proportions used to fine-tune the PLMs. Experiments on multiple corpora and PLMs consistently show that the correlations obtained by our proposed method, which requires no human-annotated examples, are comparable to those computed in prior work using human-annotated examples.
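As a rough illustration of the protocol described above, the sketch below wires the pipeline together: word-list-based mining of gendered sentences, ratio-controlled mixture sampling, and the rank correlation between the controlled gender proportions and an intrinsic measure's scores. All names (mine_gendered_sentences, build_mixture, fine_tune, bias_score), the word lists, and the ratio grid are hypothetical placeholders rather than the authors' code; the fine-tuning loop and the intrinsic measure (e.g. an AUL- or CrowS-Pairs-style score) must be supplied by the reader.

```python
# Minimal sketch of the evaluation protocol (hypothetical helper names, not the authors' code).
import random
from scipy.stats import spearmanr

# Illustrative gender-related word lists; real experiments would use fuller lists.
FEMALE_WORDS = {"she", "her", "woman", "women", "mother", "daughter"}
MALE_WORDS = {"he", "his", "man", "men", "father", "son"}

def mine_gendered_sentences(corpus):
    """Split an unannotated corpus into female-only and male-only sentences
    using the word lists; sentences mentioning both or neither are dropped."""
    female, male = [], []
    for sent in corpus:
        tokens = set(sent.lower().split())
        has_f, has_m = bool(tokens & FEMALE_WORDS), bool(tokens & MALE_WORDS)
        if has_f and not has_m:
            female.append(sent)
        elif has_m and not has_f:
            male.append(sent)
    return female, male

def build_mixture(female, male, female_ratio, size, seed=0):
    """Sample a fine-tuning set with a controlled female:male sentence proportion."""
    rng = random.Random(seed)
    n_f = int(size * female_ratio)
    return rng.sample(female, n_f) + rng.sample(male, size - n_f)

# --- placeholders: supply your own fine-tuning loop and intrinsic bias measure ---
def fine_tune(plm_name, sentences):
    """Fine-tune the PLM (e.g. with a masked-LM objective) on `sentences`; return the model."""
    raise NotImplementedError

def bias_score(model):
    """Any intrinsic bias evaluation measure applied to the bias-controlled model."""
    raise NotImplementedError

def correlate_measure(plm_name, corpus, ratios=(0.0, 0.25, 0.5, 0.75, 1.0), size=10_000):
    """Spearman rank correlation between the controlled gender proportions and the
    measure's scores; a reliable measure should rank more skewed models as more biased."""
    female, male = mine_gendered_sentences(corpus)
    scores = [bias_score(fine_tune(plm_name, build_mixture(female, male, r, size)))
              for r in ratios]
    rho, p_value = spearmanr(ratios, scores)
    return rho, p_value
```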

Related research

10/06/2022
Debiasing isn't enough! – On the Effectiveness of Debiasing MLMs and their Social Biases in Downstream Tasks
We study the relationship between task-agnostic intrinsic and task-speci...

09/13/2023
In-Contextual Bias Suppression for Large Language Models
Despite their impressive performance in a wide range of NLP tasks, Large...

09/18/2023
Evaluating Gender Bias of Pre-trained Language Models in Natural Language Inference by Considering All Labels
Discriminatory social biases, including gender biases, have been found i...

10/03/2021
Adversarial Examples Generation for Reducing Implicit Gender Bias in Pre-trained Models
Over the last few years, Contextualized Pre-trained Neural Language Mode...

09/10/2021
Assessing the Reliability of Word Embedding Gender Bias Measures
Various measures have been proposed to quantify human-like social biases...

07/17/2019
Decoding the Style and Bias of Song Lyrics
The central idea of this paper is to gain a deeper understanding of song...

12/20/2022
Trustworthy Social Bias Measurement
How do we design measures of social bias that we trust? While prior work...
