Target specification bias, counterfactual prediction, and algorithmic fairness in healthcare

08/03/2023
by Eran Tal, et al.

Bias in applications of machine learning (ML) to healthcare is usually attributed to unrepresentative or incomplete data, or to underlying health disparities. This article identifies a more pervasive source of bias that affects the clinical utility of ML-enabled prediction tools: target specification bias. Target specification bias arises when the operationalization of the target variable does not match its definition by decision makers. The mismatch is often subtle, and stems from the fact that decision makers are typically interested in predicting the outcomes of counterfactual, rather than actual, healthcare scenarios. Target specification bias persists independently of data limitations and health disparities. When left uncorrected, it gives rise to an overestimation of predictive accuracy, to inefficient utilization of medical resources, and to suboptimal decisions that can harm patients. Recent work in metrology - the science of measurement - suggests ways of counteracting target specification bias and avoiding its harmful consequences.
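To make the mismatch concrete, below is a minimal, hypothetical simulation in Python (not taken from the article): a logistic risk model is fit and validated against the outcome actually observed after clinicians intervened, even though the decision maker's target is the counterfactual outcome had no treatment been given. The data-generating process, variable names, and parameter values are all illustrative assumptions.

    # Hypothetical sketch of target specification bias: the model's target is
    # operationalized as the observed outcome, while the decision-relevant
    # target is the counterfactual outcome under no treatment.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 40_000

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = rng.normal(size=n)                    # continuous risk factor
    c = rng.binomial(1, 0.3, size=n)          # comorbidity flag

    # Decision-relevant (counterfactual) target: risk of a bad outcome if NOT treated.
    y_untreated = rng.binomial(1, sigmoid(x + 2.5 * c - 1.0))

    # Clinicians treat flagged patients far more often, and treatment is effective.
    treated = rng.binomial(1, sigmoid(5.0 * c - 2.5))
    y_treated = rng.binomial(1, sigmoid(x + 2.5 * c - 5.0))

    # The dataset records only the actual (factual) outcome.
    y_observed = np.where(treated == 1, y_treated, y_untreated)

    # Standard workflow: operationalize the target as the observed outcome.
    X = np.column_stack([x, c])
    train, test = slice(0, n // 2), slice(n // 2, n)
    model = LogisticRegression().fit(X[train], y_observed[train])
    score = model.predict_proba(X[test])[:, 1]

    print("learned coefficient on comorbidity flag:", round(model.coef_[0, 1], 2))
    print("AUC vs observed outcome (reported):",
          round(roc_auc_score(y_observed[test], score), 3))
    print("AUC vs untreated counterfactual (decision-relevant):",
          round(roc_auc_score(y_untreated[test], score), 3))

The first AUC is computed against the very outcome the model was fit to; the second against the untreated counterfactual on which the treatment decision depends. In this toy setup the flagged, heavily treated patients look low-risk in the observed data, so the reported score tends to overstate how well the model ranks patients by their risk absent intervention, illustrating the overestimation of predictive accuracy described above.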


Related research

11/16/2022 · Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN
Auditing machine learning-based (ML) healthcare tools for bias is critic...

06/23/2023 · Multi-Target Multiplicity: Flexibility and Fairness in Target Specification under Resource Constraints
Prediction models have been widely adopted as the basis for decision-mak...

02/16/2023 · Towards Fair Machine Learning Software: Understanding and Addressing Model Bias Through Counterfactual Thinking
The increasing use of Machine Learning (ML) software can lead to unfair ...

11/04/2019 · Understanding racial bias in health using the Medical Expenditure Panel Survey data
Over the years, several studies have demonstrated that there exist signi...

03/17/2020 · Fair inference on error-prone outcomes
Fair inference in supervised learning is an important and active area of...

10/03/2022 · An intersectional framework for counterfactual fairness in risk prediction
Along with the increasing availability of data in many sectors has come ...

05/22/2023 · Advancing Community Engaged Approaches to Identifying Structural Drivers of Racial Bias in Health Diagnostic Algorithms
Much attention and concern has been raised recently about bias and the u...
