Risk of Training Diagnostic Algorithms on Data with Demographic Bias
One of the critical challenges in machine learning applications is making fair predictions. There are numerous recent examples across domains convincingly showing that algorithms trained on biased datasets can easily lead to erroneous or discriminatory conclusions. This is even more crucial in clinical applications, where predictive algorithms are typically designed from a limited or given set of medical images, and demographic variables such as age, sex, and race are not taken into account. In this work, we conduct a survey of the MICCAI 2018 proceedings to investigate common practice in medical image analysis applications. Surprisingly, we found that papers focusing on diagnosis rarely describe the demographics of the datasets used, and that the diagnosis is based purely on images. To highlight the importance of considering demographics in diagnosis tasks, we used a publicly available dataset of skin lesions. We then demonstrate that a classifier with an overall area under the curve (AUC) of 0.83 has performance varying between 0.76 and 0.91 across subgroups defined by age and sex, even though the training set was relatively balanced. Moreover, we show that it is possible to learn unbiased features by explicitly using demographic variables in an adversarial training setup, which leads to balanced scores per subgroup. Finally, we discuss the implications of these results and provide recommendations for further research.
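To illustrate the subgroup analysis the abstract describes, the sketch below computes an overall AUC alongside per-subgroup AUCs. It assumes scikit-learn and a metadata table with demographic columns; the names `y_true`, `y_score`, and `meta` are hypothetical placeholders, not artifacts from the paper.

```python
# Minimal sketch: overall AUC vs. AUC within demographic subgroups.
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auc(y_true, y_score, groups):
    """Return AUC overall and within each demographic subgroup."""
    report = {"overall": roc_auc_score(y_true, y_score)}
    for g in np.unique(groups):
        mask = groups == g
        # Skip degenerate subgroups that contain only one class,
        # since AUC is undefined there.
        if len(np.unique(y_true[mask])) == 2:
            report[str(g)] = roc_auc_score(y_true[mask], y_score[mask])
    return report

# Hypothetical usage, assuming a metadata DataFrame with a "sex" column:
# print(subgroup_auc(y_true, y_score, meta["sex"].values))
```

A gap such as the reported 0.76 vs. 0.91 would show up directly in this per-subgroup report even when the overall AUC looks acceptable.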
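The adversarial training setup can be sketched as follows, assuming PyTorch: a shared encoder feeds a diagnosis classifier, while an adversary tries to predict a demographic variable from the same features through a gradient-reversal layer, pushing the encoder toward features the adversary cannot exploit. All layer sizes, module names, and the loss weighting are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of adversarial debiasing with gradient reversal (assumed setup).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back into the encoder.
        return -ctx.lambd * grad_output, None

encoder = nn.Sequential(nn.Linear(512, 128), nn.ReLU())  # image features -> embedding
classifier = nn.Linear(128, 2)                           # lesion diagnosis head
adversary = nn.Linear(128, 2)                            # predicts e.g. sex from features

def training_step(x, y_diag, y_demo, lambd=1.0):
    z = encoder(x)
    loss_diag = nn.functional.cross_entropy(classifier(z), y_diag)
    # The adversary sees reversed gradients: minimizing its loss trains the
    # adversary itself, while the encoder is pushed to discard the
    # demographic signal the adversary relies on.
    loss_adv = nn.functional.cross_entropy(
        adversary(GradReverse.apply(z, lambd)), y_demo)
    return loss_diag + loss_adv
```

Under this scheme the classifier and encoder minimize the diagnosis loss, while the gradient reversal makes the encoder effectively maximize the adversary's loss, which is what drives the per-subgroup scores toward balance.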