Model Selection's Disparate Impact in Real-World Deep Learning Applications

04/01/2021
by Jessica Zosa Forde et al.

Algorithmic fairness has emphasized the role of biased data in automated decision outcomes. Recently, attention has shifted to sources of bias that implicate fairness at other stages of the ML pipeline. We contend that one such source, human preferences in model selection, remains under-explored in terms of its role in disparate impact across demographic groups. Using a deep learning model trained on real-world medical imaging data, we verify our claim empirically and argue that the choice of metric used for model comparison can significantly bias model selection outcomes.
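The abstract's core claim, that the metric used to compare candidate models can change which model gets deployed, can be illustrated with a minimal sketch. The candidate names and per-group accuracy values below are invented for illustration and do not come from the paper; the point is only that ranking by aggregate accuracy and ranking by worst-group accuracy can disagree:

```python
# Hypothetical per-group accuracies for three candidate checkpoints.
# Values are illustrative, not taken from the paper's experiments.
candidates = {
    "model_a": {"group_1": 0.95, "group_2": 0.72},  # high average, large gap
    "model_b": {"group_1": 0.84, "group_2": 0.82},  # lower average, small gap
    "model_c": {"group_1": 0.88, "group_2": 0.74},
}

def overall_accuracy(groups):
    # Unweighted mean across demographic groups.
    return sum(groups.values()) / len(groups)

def worst_group_accuracy(groups):
    # Accuracy of the worst-served demographic group.
    return min(groups.values())

# The same candidate pool yields different "best" models depending on metric.
best_by_overall = max(candidates, key=lambda m: overall_accuracy(candidates[m]))
best_by_worst_group = max(candidates, key=lambda m: worst_group_accuracy(candidates[m]))

print(best_by_overall)      # model_a (mean 0.835)
print(best_by_worst_group)  # model_b (worst group 0.82)
```

Selecting by the aggregate metric here deploys the model with the largest gap between groups, which is the kind of disparate impact from model selection the paper examines.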


