Investigating underdiagnosis of AI algorithms in the presence of multiple sources of dataset bias

01/19/2022
by Melanie Bernhardt, et al.

Deep learning models have shown great potential for image-based diagnosis to assist clinical decision making. At the same time, an increasing number of reports raise concerns about the potential risk that machine learning could amplify existing health disparities due to human biases embedded in the training data. If we wish to build fair artificial intelligence systems, it is of great importance to carefully investigate the extent to which such biases may be reproduced or even amplified. Seyyed-Kalantari et al. advance this conversation by analysing the performance of a disease classifier across population subgroups. They raise performance disparities related to underdiagnosis as a point of concern; we identify areas of this analysis that we believe deserve additional attention. Specifically, we wish to highlight some theoretical and practical difficulties of assessing model fairness by testing on data drawn from the same biased distribution as the training data, especially when the sources and amounts of bias are unknown.
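A minimal, self-contained sketch of the evaluation problem the abstract points to, not an analysis from the paper itself: the subgroup structure, the simulated labelling process, and all variable names and numbers below are illustrative assumptions. It measures a classifier's per-subgroup underdiagnosis (false-negative) rate twice, once against the biased test labels drawn from the same labelling process as the training data and once against the unobserved true labels, using Python with numpy and scikit-learn.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, size=n)          # 0 = group A, 1 = group B (hypothetical subgroups)
y_true = rng.binomial(1, 0.3, size=n)       # true disease status (unobserved in practice)
x = y_true + rng.normal(0.0, 1.0, size=n)   # imaging feature carrying the disease signal

# Assumed labelling bias: milder (low-feature) diseased patients in group B
# are recorded as negative half of the time, in both train and test labels.
missed = (group == 1) & (y_true == 1) & (x < 1.0) & (rng.random(n) < 0.5)
y_obs = np.where(missed, 0, y_true)

X = np.column_stack([x, group])
tr, te = slice(0, n // 2), slice(n // 2, n)

clf = LogisticRegression().fit(X[tr], y_obs[tr])   # trained on the biased labels
y_pred = clf.predict(X[te])

def fnr(labels, preds, mask):
    # Underdiagnosis rate: fraction of positive cases predicted negative.
    pos = mask & (labels == 1)
    return float(np.mean(preds[pos] == 0))

g_te, y_true_te, y_obs_te = group[te], y_true[te], y_obs[te]
for g in (0, 1):
    m = g_te == g
    print(f"group {g}: FNR vs biased test labels = {fnr(y_obs_te, y_pred, m):.2f}, "
          f"FNR vs true labels = {fnr(y_true_te, y_pred, m):.2f}")

Because the missed positives in group B never enter the denominator of the biased-label evaluation, the measured underdiagnosis rate for that group understates the rate against the true labels, while the two agree for the unbiased group; how far and in which direction the estimate is off depends on the unknown labelling process, which is the difficulty the comment highlights.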

