Predict Responsibly: Increasing Fairness by Learning To Defer

11/17/2017
by David Madras, et al.

Machine learning systems, which are often used for high-stakes decisions, suffer from two mutually reinforcing problems: unfairness and opaqueness. Many popular models, although generally accurate, cannot express uncertainty about their predictions. Even in regimes where a model is inaccurate, users may trust its predictions too fully and allow its biases to reinforce their own. In this work, we explore models that learn to defer. In our scheme, a model learns to classify accurately and fairly, but also to defer if necessary, passing judgment to a downstream decision-maker such as a human user. We further propose a learning algorithm which accounts for potential biases held by decision-makers later in a pipeline. Experiments on real-world datasets demonstrate that learning to defer can make a model not only more accurate but also less biased. We show that deferring models can greatly improve the fairness of the entire pipeline, even when operated by highly biased users.
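To make the scheme concrete, the sketch below shows one way a learn-to-defer objective could be set up in PyTorch: a two-headed network produces a prediction and a defer probability, and the training loss mixes the model's and the decision-maker's errors, adds a fairness penalty on the combined output, and discourages excessive deferral. The architecture, the demographic-parity-style gap, and the coefficients fair_coef and defer_coef are illustrative assumptions for this sketch, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeferringClassifier(nn.Module):
    """Two-headed network: one head predicts the label, the other the
    probability of deferring to a downstream decision-maker (DM).
    Names and sizes here are illustrative, not taken from the paper."""

    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.classify = nn.Linear(hidden, 1)   # model's own prediction
        self.defer = nn.Linear(hidden, 1)      # probability of passing judgment

    def forward(self, x):
        h = self.body(x)
        p = torch.sigmoid(self.classify(h)).squeeze(-1)   # P(y = 1 | x)
        s = torch.sigmoid(self.defer(h)).squeeze(-1)      # P(defer | x)
        return p, s


def deferring_loss(p, s, dm_pred, y, group, fair_coef=1.0, defer_coef=0.1):
    """Expected loss of the combined model/DM pipeline plus a fairness penalty.

    dm_pred is the (possibly biased) downstream decision-maker's prediction
    in [0, 1]; y and group are float tensors of 0/1 labels and a binary
    sensitive attribute. The gap term and coefficients are stand-ins.
    """
    # Classification loss: the model answers a (1 - s) fraction of cases,
    # the decision-maker answers the rest.
    cls = (1 - s) * F.binary_cross_entropy(p, y, reduction="none") \
        + s * F.binary_cross_entropy(dm_pred, y, reduction="none")

    # System-level output whose group-wise rates the fairness term equalizes
    # (a demographic-parity-style gap, used here as a simple example).
    yhat = (1 - s) * p + s * dm_pred
    fairness_gap = (yhat[group == 1].mean() - yhat[group == 0].mean()).abs()

    # Penalize deferring too often so the model does not punt every decision.
    return cls.mean() + fair_coef * fairness_gap + defer_coef * s.mean()
```

During training, gradients flow through both heads, so the model learns both when to predict and when to hand off; dm_pred would come from whatever downstream decision-maker the pipeline is evaluated with, including a highly biased one.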


