Algorithmic encoding of protected characteristics and its implications on disparities across subgroups

10/27/2021
by Ben Glocker et al.

It has been rightfully emphasized that the use of AI for clinical decision making could amplify health disparities. A machine learning model may pick up undesirable correlations, for example, between a patient's racial identity and clinical outcome. Such correlations are often present in (historical) data used for model development. There has been an increase in studies reporting biases in disease detection models across patient subgroups. Besides the scarcity of data from underserved populations, very little is known about how these biases are encoded and how one may reduce or even remove disparate performance. There is some speculation about whether algorithms may recognize patient characteristics such as biological sex or racial identity, and then directly or indirectly use this information when making predictions. But it remains unclear how we can establish whether such information is actually used. This article aims to shed some light on these issues by exploring a new methodology that allows intuitive inspection of the inner workings of machine learning models for image-based disease detection. We also evaluate an effective yet debatable technique for addressing disparities that leverages the automatic prediction of patient characteristics, resulting in models with comparable true and false positive rates across subgroups. Our findings may stimulate the discussion about safe and ethical use of AI.
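The fairness criterion the abstract describes (comparable true and false positive rates across subgroups) corresponds to what the fairness literature calls equalized odds. As a minimal sketch of how such a disparity audit can be computed, the helper below disaggregates TPR and FPR by subgroup; the function name and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def subgroup_rates(y_true, y_pred, groups):
    """Compute true and false positive rates per patient subgroup.

    An equalized-odds audit compares these rates across subgroups
    (e.g. biological sex or racial identity): large gaps in TPR or
    FPR indicate disparate model performance.
    """
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        yt, yp = y_true[mask], y_pred[mask]
        # Rates are undefined for a subgroup with no positives/negatives.
        tpr = float(np.mean(yp[yt == 1])) if np.any(yt == 1) else float("nan")
        fpr = float(np.mean(yp[yt == 0])) if np.any(yt == 0) else float("nan")
        rates[g] = {"TPR": tpr, "FPR": fpr}
    return rates

# Toy example: binary disease labels and predictions for two subgroups
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(subgroup_rates(y_true, y_pred, groups))
# Subgroup A: TPR 0.5, FPR 0.5; subgroup B: TPR 1.0, FPR 0.0
```

In this toy case the model satisfies neither equal TPR nor equal FPR across A and B, the kind of gap the paper's debated mitigation technique aims to close.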


