Fairness and bias correction in machine learning for depression prediction: results from four different study populations

11/10/2022
by Vien Ngoc Dang, et al.

A significant level of stigma and inequality exists in mental healthcare, especially in under-served populations, and these disparities propagate into the data that is collected. When not properly accounted for, machine learning (ML) models learned from data can reinforce the structural biases already present in society. Here, we present a systematic study of bias in ML models designed to predict depression across four case studies covering different countries and populations. We find that standard ML approaches regularly show biased behavior. However, we show that standard mitigation techniques, as well as our own post-hoc method, can be effective in reducing the level of unfair bias. We provide practical recommendations for developing ML models for depression risk prediction with increased fairness and trust in the real world. No single best ML model for depression prediction provides equality of outcomes, which underscores the importance of analyzing fairness during model selection and of transparent reporting about the impact of debiasing interventions.
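The abstract does not specify the authors' post-hoc method. As a minimal sketch of one common family of post-hoc mitigation, the snippet below fits a separate decision threshold per demographic group so that group-wise true-positive rates are pulled toward a common target (an "equal opportunity" style correction). All function names and the evaluation setup are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def true_positive_rate(y_true, y_pred):
    """TPR = TP / (TP + FN), computed over the actual positives."""
    positives = y_true == 1
    if positives.sum() == 0:
        return 0.0
    return float((y_pred[positives] == 1).mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute TPR difference between two groups coded 0/1."""
    tprs = [true_positive_rate(y_true[group == g], y_pred[group == g])
            for g in (0, 1)]
    return abs(tprs[0] - tprs[1])

def fit_group_thresholds(y_true, scores, group, grid=None):
    """Choose one threshold per group on a validation set so that each
    group's TPR is as close as possible to the overall TPR obtained at
    the default 0.5 cut-off."""
    if grid is None:
        grid = np.linspace(0.05, 0.95, 19)
    target = true_positive_rate(y_true, (scores >= 0.5).astype(int))
    thresholds = {}
    for g in (0, 1):
        mask = group == g
        best = min(grid, key=lambda t: abs(
            true_positive_rate(y_true[mask],
                               (scores[mask] >= t).astype(int)) - target))
        thresholds[g] = float(best)
    return thresholds

def predict_with_thresholds(scores, group, thresholds):
    """Apply the group-specific cut-offs learned above."""
    cut = np.where(group == 1, thresholds[1], thresholds[0])
    return (scores >= cut).astype(int)
```

On synthetic scores where one group's risk scores are systematically shifted downward, the default 0.5 cut-off yields a large TPR gap between groups, and the fitted per-group thresholds shrink it. Note that adjusting thresholds per group trades a small amount of aggregate accuracy for equality of outcomes, which is why transparent reporting of the intervention's impact matters.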


Related research

04/08/2023  Connecting Fairness in Machine Learning with Public Health Equity
Machine learning (ML) has become a critical tool in public health, offer...

03/30/2023  Non-Invasive Fairness in Learning through the Lens of Data Drift
Machine Learning (ML) models are widely employed to drive many modern da...

11/17/2022  Monitoring machine learning (ML)-based risk prediction algorithms in the presence of confounding medical interventions
Monitoring the performance of machine learning (ML)-based risk predictio...

02/17/2023  Function Composition in Trustworthy Machine Learning: Implementation Choices, Insights, and Questions
Ensuring trustworthiness in machine learning (ML) models is a multi-dime...

10/04/2021  Fairness and underspecification in acoustic scene classification: The case for disaggregated evaluations
Underspecification and fairness in machine learning (ML) applications ha...

08/07/2022  Bias Reducing Multitask Learning on Mental Health Prediction
There has been an increase in research in developing machine learning mo...

04/01/2021  Model Selection's Disparate Impact in Real-World Deep Learning Applications
Algorithmic fairness has emphasized the role of biased data in automated...
