Harvard Glaucoma Fairness: A Retinal Nerve Disease Dataset for Fairness Learning and Fair Identity Normalization

06/15/2023
by Yan Luo, et al.

Fairness in machine learning is important for societal well-being, but limited public datasets hinder its progress. Currently, no dedicated public medical imaging datasets are available for fairness learning, even though minority groups suffer from more health issues. To address this gap, we introduce Harvard Glaucoma Fairness (Harvard-GF), a retinal nerve disease dataset with both 2D and 3D imaging data and balanced racial groups for glaucoma detection. Glaucoma is the leading cause of irreversible blindness globally, and Black individuals have double the glaucoma prevalence of other races. We also propose a fair identity normalization (FIN) approach to equalize feature importance across identity groups. Compared with various state-of-the-art fairness learning methods, FIN achieves superior performance on both racial and gender fairness tasks with 2D and 3D imaging data, demonstrating the utility of the Harvard-GF dataset for fairness learning. To facilitate fairness comparisons between models, we further propose an equity-scaled performance measure, which can be flexibly applied to all kinds of performance metrics in the context of fairness. The dataset and code are publicly accessible via https://doi.org/10.7910/DVN/A4XMO1 and https://github.com/luoyan407/Harvard-GF, respectively.
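The abstract names two technical ideas without spelling them out: an equity-scaled performance measure and fair identity normalization (FIN). Below is a minimal, hypothetical Python sketch of how such components could look; the function names (`equity_scaled_auc`, `group_normalize`) and the exact equity-scaling formula are assumptions for illustration, not the paper's definitive implementation (see the official repository for that).

```python
# Hypothetical sketch only: the precise formulations in Harvard-GF may differ.
import numpy as np
from sklearn.metrics import roc_auc_score


def equity_scaled_auc(y_true, y_score, groups):
    """Scale overall AUC down by the summed absolute gaps between the
    overall AUC and each identity group's AUC.
    Assumed form: ES-AUC = AUC / (1 + sum_g |AUC - AUC_g|)."""
    y_true, y_score, groups = map(np.asarray, (y_true, y_score, groups))
    overall = roc_auc_score(y_true, y_score)
    gap = 0.0
    for g in np.unique(groups):
        mask = groups == g
        if len(np.unique(y_true[mask])) < 2:
            continue  # AUC is undefined for a single-class group; skip it
        gap += abs(overall - roc_auc_score(y_true[mask], y_score[mask]))
    return overall / (1.0 + gap)


def group_normalize(features, groups, eps=1e-6):
    """Equalize feature scale across identity groups by standardizing each
    group's features with its own mean and standard deviation -- a rough,
    training-free stand-in for the learnable FIN module in the paper."""
    features = np.asarray(features, dtype=float).copy()
    groups = np.asarray(groups)
    for g in np.unique(groups):
        mask = groups == g
        mu = features[mask].mean(axis=0)
        sigma = features[mask].std(axis=0)
        features[mask] = (features[mask] - mu) / (sigma + eps)
    return features
```

The same scaling idea can be applied to other metrics (accuracy, sensitivity, specificity) by substituting the metric function, which is what makes an equity-scaled measure convenient for side-by-side fairness comparisons across models.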
