Toward Gender-Inclusive Coreference Resolution

10/30/2019
by Yang Trista Cao, et al.

Correctly resolving textual mentions of people fundamentally entails making inferences about those people. Such inferences raise the risk of systemic biases in coreference resolution systems, including biases that reinforce cis-normativity and can harm binary and non-binary trans (and cis) stakeholders. To better understand such biases, we foreground nuanced conceptualizations of gender from sociology and sociolinguistics, and investigate where in the machine learning pipeline such biases can enter a system. We inspect many existing datasets for trans-exclusionary biases, and develop two new datasets for interrogating bias in crowd annotations and in existing coreference resolution systems. Through these studies, conducted on English text, we confirm that without acknowledging and building systems that recognize the complexity of gender, we will build systems that fail their users along several dimensions: quality of service, stereotyping, and over- or under-representation.
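As a minimal illustration of the kind of probing the abstract describes (interrogating bias in an existing coreference system), the sketch below runs an off-the-shelf resolver on near-identical sentences that differ only in pronoun. This example is not from the paper; it assumes the third-party neuralcoref extension for spaCy 2.x is installed, and the model name and sentences are illustrative choices.

    # Hedged sketch: probe how an existing coreference resolver handles
    # she / he / singular they. Assumes: pip install spacy==2.1.0 neuralcoref
    # and the "en_core_web_sm" model.
    import spacy
    import neuralcoref

    nlp = spacy.load("en_core_web_sm")
    neuralcoref.add_to_pipe(nlp)  # adds coreference resolution to the pipeline

    examples = [
        "Alex is a nurse. She said the shift was long.",
        "Alex is a nurse. He said the shift was long.",
        "Alex is a nurse. They said the shift was long.",  # singular they
    ]

    for text in examples:
        doc = nlp(text)
        # doc._.coref_clusters lists the mention clusters the system found.
        # A system with trans-exclusionary biases may link "she"/"he" to "Alex"
        # but drop or misresolve "they", a quality-of-service failure.
        print(text, "->", doc._.coref_clusters)

Comparing the clusters across the three variants gives a quick, informal check on whether pronoun choice alone changes whether the person is resolved at all.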
