Debiasing Personal Identities in Toxicity Classification

08/14/2019
by Apik Ashod Zorian, et al.

As Machine Learning models are increasingly relied upon for automated decision making, the issue of model bias becomes more and more prevalent. In this paper, we train a text classification model and optimize for bias minimization by measuring not only the model's performance on our dataset as a whole, but also its performance across different demographic subgroups. This requires measuring performance independently for each subgroup and quantifying bias by comparing those results against results on the rest of the data. We show how unintended bias can be detected using these metrics, and how removing bias from a dataset entirely can degrade overall performance.
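As an illustration of the kind of per-subgroup measurement the abstract describes, the sketch below computes a subgroup AUC and a background-positive, subgroup-negative (BPSN) AUC from model scores: the subgroup AUC checks performance within an identity group, while a low BPSN AUC signals that the model over-flags non-toxic comments mentioning that group. This is a minimal sketch, not the authors' implementation; the DataFrame layout and column names (toxic, model_score, and boolean identity columns such as muslim) are assumptions for illustration.

import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auc(df, subgroup, label="toxic", score="model_score"):
    # AUC restricted to comments that mention the given identity subgroup.
    sub = df[df[subgroup].astype(bool)]
    return roc_auc_score(sub[label], sub[score])

def bpsn_auc(df, subgroup, label="toxic", score="model_score"):
    # Background-positive, subgroup-negative AUC: scores non-toxic
    # subgroup comments against toxic background comments. A low value
    # means the model tends to over-flag the subgroup as toxic.
    pos = df[label].astype(bool)
    in_group = df[subgroup].astype(bool)
    part = df[(in_group & ~pos) | (~in_group & pos)]
    return roc_auc_score(part[label], part[score])

# Hypothetical usage, assuming predictions.csv holds labels, model
# scores, and boolean identity columns:
# df = pd.read_csv("predictions.csv")
# for g in ["muslim", "female", "black"]:
#     print(g, subgroup_auc(df, g), bpsn_auc(df, g))

Comparing each subgroup's AUC against the score on the remainder of the data flags the groups where performance diverges, which is the unintended-bias signal the abstract refers to.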

