What's in a Name? Reducing Bias in Bios without Access to Protected Attributes

04/10/2019
by Alexey Romanov, et al.

There is a growing body of work that proposes methods for mitigating bias in machine learning systems. These methods typically rely on access to protected attributes such as race, gender, or age. However, this raises two significant challenges: (1) protected attributes may not be available or it may not be legal to use them, and (2) it is often desirable to simultaneously consider multiple protected attributes, as well as their intersections. In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name. This method leverages the societal biases that are encoded in word embeddings, eliminating the need for access to protected attributes. Crucially, it only requires access to individuals' names at training time and not at deployment time. We evaluate two variations of our proposed method using a large-scale dataset of online biographies. We find that both variations simultaneously reduce race and gender biases, with almost no reduction in the classifier's overall true positive rate.
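To make the core idea concrete, the sketch below illustrates the correlation-discouraging term described in the abstract: a penalty on the empirical covariance between the predicted probability of each individual's true occupation and their name embedding, added to the usual classification loss. This is a minimal illustration, assuming a PyTorch classifier; the function names (covariance_penalty, debiased_loss), the weight lambda_cov, and the toy tensors are hypothetical stand-ins, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def covariance_penalty(true_class_probs, name_embeddings):
    # true_class_probs: (batch,) predicted probability of each person's
    # true occupation; name_embeddings: (batch, d) name word embeddings.
    p_centered = true_class_probs - true_class_probs.mean()
    e_centered = name_embeddings - name_embeddings.mean(dim=0, keepdim=True)
    # Empirical covariance between the scalar probability and each
    # embedding dimension; its L2 norm serves as a single penalty value.
    cov = (p_centered.unsqueeze(1) * e_centered).mean(dim=0)  # shape (d,)
    return torch.linalg.vector_norm(cov)

def debiased_loss(logits, targets, name_embeddings, lambda_cov=1.0):
    # Standard cross-entropy plus the correlation-discouraging term.
    ce = F.cross_entropy(logits, targets)
    probs = torch.softmax(logits, dim=1)
    true_probs = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ce + lambda_cov * covariance_penalty(true_probs, name_embeddings)

# Toy usage: 4 biographies, 3 occupations, 5-dimensional name embeddings.
logits = torch.randn(4, 3)
targets = torch.tensor([0, 2, 1, 0])
names = torch.randn(4, 5)
loss = debiased_loss(logits, targets, names, lambda_cov=0.5)
```

Because the penalty enters only through the training loss, name embeddings are needed only at training time; the deployed classifier never sees names, consistent with the deployment constraint stated in the abstract.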

Related research

10/03/2021
xFAIR: Better Fairness via Model-based Rebalancing of Protected Attributes
Machine learning software can generate models that inappropriately discr...

09/21/2021
Evaluating Debiasing Techniques for Intersectional Biases
Bias is pervasive in NLP models, motivating the development of automatic...

06/09/2022
Unlearning Protected User Attributes in Recommendations with Adversarial Training
Collaborative filtering algorithms capture underlying consumption patter...

06/16/2018
Right for the Right Reason: Training Agnostic Networks
We consider the problem of a neural network being requested to classify ...

12/11/2014
Certifying and removing disparate impact
What does it mean for an algorithm to be biased? In U.S. law, unintentio...

06/08/2020
Iterative Effect-Size Bias in Ridehailing: Measuring Social Bias in Dynamic Pricing of 100 Million Rides
Algorithmic bias is the systematic preferential or discriminatory treatm...

11/27/2018
Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved
Assessing the fairness of a decision making system with respect to a pro...
