The Gap on GAP: Tackling the Problem of Differing Data Distributions in Bias-Measuring Datasets

11/03/2020
by Vid Kocijan, et al.

Diagnostic datasets that can detect biased models are an important prerequisite for bias reduction within natural language processing. However, undesired patterns in the collected data can make such tests incorrect. For example, if the feminine subset of a gender-bias-measuring coreference resolution dataset contains sentences with a longer average distance between the pronoun and the correct candidate, an RNN-based model may perform worse on this subset due to long-term dependencies. In this work, we introduce a theoretically grounded method for weighting test samples to cope with such patterns in the test data. We demonstrate the method on the GAP dataset for coreference resolution. We annotate GAP with spans of all personal names and show that examples in the female subset contain more personal names and a longer distance between pronouns and their referents, potentially affecting the bias score in an undesired way. Using our weighting method, we find the set of weights on the test instances that should be used for coping with these correlations, and we re-evaluate 16 recently released coreference models.
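
The abstract does not spell out the weighting procedure, so the sketch below is only a rough illustration of the general idea, not the paper's theoretically grounded method: reweight the examples of one subset so that the distribution of confounding features (here, pronoun-to-referent distance and number of personal names) matches the other subset, then recompute the bias score with those weights. The density-ratio estimation via a logistic classifier, the synthetic features and numbers, and the use of per-example accuracy in place of GAP's F1-based feminine/masculine score ratio are all assumptions made for this example.

```python
# Illustrative sketch (not the authors' exact method): importance-weight the
# feminine test subset so its confounding-feature distribution matches the
# masculine subset, then compare raw and weighted subset scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(feats_target, feats_reference):
    """Weights for target-subset examples so their feature distribution
    approximates the reference subset's.

    Trains a classifier to tell the two subsets apart and uses
    p(reference | x) / p(target | x) as each target example's weight.
    """
    X = np.vstack([feats_target, feats_reference])
    y = np.concatenate([np.zeros(len(feats_target)), np.ones(len(feats_reference))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p_ref = clf.predict_proba(feats_target)[:, 1]
    w = p_ref / np.clip(1.0 - p_ref, 1e-12, None)
    return w * len(feats_target) / w.sum()  # normalise to mean weight 1

def weighted_accuracy(correct, weights=None):
    """Accuracy over per-example 0/1 outcomes, optionally importance-weighted."""
    correct = np.asarray(correct, dtype=float)
    return float(correct.mean() if weights is None else np.average(correct, weights=weights))

# Synthetic stand-in data: features = [pronoun-to-referent distance, #personal names].
rng = np.random.default_rng(0)
fem_feats = np.column_stack([rng.normal(14, 4, 500), rng.poisson(3.2, 500)])
masc_feats = np.column_stack([rng.normal(11, 4, 500), rng.poisson(2.6, 500)])
fem_correct = rng.random(500) < 0.80   # stand-in per-example model outcomes
masc_correct = rng.random(500) < 0.85

w = importance_weights(fem_feats, masc_feats)
bias_raw = weighted_accuracy(fem_correct) / weighted_accuracy(masc_correct)
bias_adj = weighted_accuracy(fem_correct, w) / weighted_accuracy(masc_correct)
print(f"raw F/M score ratio: {bias_raw:.3f}  adjusted: {bias_adj:.3f}")
```

With weights like these, feminine examples that look more like typical masculine examples (shorter pronoun-to-referent distance, fewer names) count more, so differences in the adjusted score ratio are less attributable to those confounding patterns in the data.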

Related research:

- Run Like a Girl! Sports-Related Gender Bias in Language and Vision (05/23/2023): Gender bias in Language and Vision datasets and models has the potential...
- Debiasing Personal Identities in Toxicity Classification (08/14/2019): As Machine Learning models continue to be relied upon for making automat...
- Counter-GAP: Counterfactual Bias Evaluation through Gendered Ambiguous Pronouns (02/11/2023): Bias-measuring datasets play a critical role in detecting biased behavio...
- A Generative Approach for Mitigating Structural Biases in Natural Language Inference (08/31/2021): Many natural language inference (NLI) datasets contain biases that allow...
- Second Order WinoBias (SoWinoBias) Test Set for Latent Gender Bias Detection in Coreference Resolution (09/28/2021): We observe an instance of gender-induced bias in a downstream applicatio...
- Decorrelate Irrelevant, Purify Relevant: Overcome Textual Spurious Correlations from a Feature Perspective (02/16/2022): Natural language understanding (NLU) models tend to rely on spurious cor...
