Fair inference on error-prone outcomes

03/17/2020
by Laura Boeschoten, et al.

Fair inference in supervised learning is an important and active area of research, yielding a range of useful methods to assess and account for fairness criteria when predicting ground truth targets. As recent work has shown, however, when target labels are error-prone, prediction unfairness can arise from measurement error. In this paper, we show that, when an error-prone proxy target is used, existing methods to assess and calibrate fairness criteria do not extend to the true target variable of interest. To remedy this problem, we suggest a framework that combines two existing literatures: fair ML methods, such as those from the counterfactual fairness literature, and measurement models from the statistical literature. We discuss these approaches and the connection between them that yields our framework. In a healthcare decision problem, we find that using a latent variable model to account for measurement error removes the unfairness detected previously.

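To make the abstract's core point concrete, the sketch below simulates a setting where the observed label is an error-prone proxy of a latent true target and the measurement error rates differ by group. It is not the paper's implementation: the error rates and group structure are hypothetical, and a simple Rogan-Gladen prevalence correction stands in for the latent variable (measurement) models used in the framework.

```python
# Minimal sketch (not the paper's method): a fairness check computed on an
# error-prone proxy label disagrees with the same check on the latent true
# target, and accounting for the (assumed, hypothetical) error rates removes
# the apparent disparity.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Protected attribute A and latent true outcome Y (equal base rates by group).
A = rng.integers(0, 2, size=n)
Y = rng.binomial(1, 0.30, size=n)

# Error-prone proxy: sensitivity/specificity of the measurement differ by group
# (hypothetical values), so the proxy distorts one group more than the other.
sens = np.where(A == 1, 0.95, 0.60)   # P(Y_obs = 1 | Y = 1, A)
spec = np.where(A == 1, 0.95, 0.90)   # P(Y_obs = 0 | Y = 0, A)
Y_obs = rng.binomial(1, Y * sens + (1 - Y) * (1 - spec))

# Naive group comparison on the proxy label: rates differ, suggesting unfairness.
rate_obs = [Y_obs[A == a].mean() for a in (0, 1)]
print("proxy positive rates by group:", np.round(rate_obs, 3))

# Rogan-Gladen correction: invert obs = p*sens + (1-p)*(1-spec) to recover the
# latent prevalence p, which is (approximately) equal across groups here.
def corrected_prevalence(obs_rate, sensitivity, specificity):
    return (obs_rate + specificity - 1) / (sensitivity + specificity - 1)

rate_corrected = [
    corrected_prevalence(Y_obs[A == a].mean(), sens[A == a][0], spec[A == a][0])
    for a in (0, 1)
]
print("corrected prevalences by group:", np.round(rate_corrected, 3))
print("actual latent prevalences:    ", np.round([Y[A == a].mean() for a in (0, 1)], 3))
```

In this simulation, the group comparison on the proxy flags a disparity that vanishes once measurement error is accounted for, mirroring the paper's finding that unfairness detected on an error-prone proxy can disappear under a model of the latent true target.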