Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN

11/16/2022
by Anaelia Ovalle, et al.

Auditing machine learning-based (ML) healthcare tools for bias is critical to preventing patient harm, especially in communities that disproportionately face health inequities. General frameworks are becoming increasingly available to measure ML fairness gaps between groups. However, ML for health (ML4H) auditing principles call for a contextual, patient-centered approach to model assessment. Therefore, ML auditing tools must be (1) better aligned with ML4H auditing principles and (2) able to illuminate and characterize the communities vulnerable to the most harm. To address this gap, we propose supplementing ML4H auditing frameworks with SLOGAN (patient Severity-based LOcal Group biAs detectioN), an automatic tool for capturing local biases in a clinical prediction task. SLOGAN adapts an existing tool, LOGAN (LOcal Group biAs detectioN), by contextualizing group bias detection in patient illness severity and past medical history. We investigate and compare SLOGAN's bias detection capabilities to LOGAN and other clustering techniques across patient subgroups in the MIMIC-III dataset. On average, SLOGAN identifies larger fairness disparities than LOGAN in over 75% of patient groups while maintaining clustering quality. Furthermore, in a diabetes case study, health disparity literature corroborates the characterizations of the most biased clusters identified by SLOGAN. Our results contribute to the broader discussion of how machine learning biases may perpetuate existing healthcare disparities.
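To make the high-level recipe concrete, here is a minimal Python sketch of severity-contextualized local bias detection in the spirit of SLOGAN: patient representations are augmented with an illness-severity score, clustered, and each cluster is scored by the performance gap between two demographic groups. This is an illustration under stated assumptions, not the paper's implementation; the k-means clustering, the accuracy-gap metric, and all variable names are hypothetical choices.

# Sketch of severity-contextualized local group bias detection.
# Assumptions (not from the paper): k-means clustering, a per-cluster
# accuracy gap between two demographic groups, and generic array inputs.
import numpy as np
from sklearn.cluster import KMeans

def local_bias_clusters(features, severity, y_true, y_pred, group, k=10):
    """Cluster patients on features augmented with illness severity,
    then score each cluster by its between-group performance gap."""
    # Contextualize patient representations with a severity column.
    X = np.column_stack([features, severity])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

    correct = (y_true == y_pred).astype(float)  # 1 if prediction was right
    gaps = {}
    for c in range(k):
        in_c = labels == c
        g0 = correct[in_c & (group == 0)]
        g1 = correct[in_c & (group == 1)]
        if len(g0) and len(g1):  # both groups must be present in the cluster
            gaps[c] = abs(g0.mean() - g1.mean())  # local fairness gap
    # Clusters with the largest gaps are the first candidates for audit.
    return sorted(gaps.items(), key=lambda kv: -kv[1])

The ranked clusters returned by such a routine would then be characterized (e.g., by dominant diagnoses or severity profile) to connect local bias to known health disparities, which is the step the diabetes case study in the paper performs qualitatively.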


Related research

04/08/2023
Connecting Fairness in Machine Learning with Public Health Equity
Machine learning (ML) has become a critical tool in public health, offer...

08/03/2023
Target specification bias, counterfactual prediction, and algorithmic fairness in healthcare
Bias in applications of machine learning (ML) to healthcare is usually a...

03/23/2023
FraudAuditor: A Visual Analytics Approach for Collusive Fraud in Health Insurance
Collusive fraud, in which multiple fraudsters collude to defraud health ...

11/08/2022
Algorithmic Bias in Machine Learning Based Delirium Prediction
Although prediction models for delirium, a commonly occurring condition ...

07/21/2022
Detecting and Preventing Shortcut Learning for Fair Medical AI using Shortcut Testing (ShorT)
Machine learning (ML) holds great promise for improving healthcare, but ...

06/27/2022
Prisoners of Their Own Devices: How Models Induce Data Bias in Performative Prediction
The unparalleled ability of machine learning algorithms to learn pattern...

04/21/2022
A Sandbox Tool to Bias(Stress)-Test Fairness Algorithms
Motivated by the growing importance of reducing unfairness in ML predict...
