FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification

03/15/2023
by Zikang Xu, et al.

Deep learning is becoming increasingly ubiquitous in medical research and applications, where it handles sensitive information and even critical diagnostic decisions. Researchers have observed significant performance disparities among subgroups with different demographic attributes, a phenomenon known as model unfairness, and have invested considerable effort in carefully designed architectures to address it. These designs often impose a heavy training burden, generalize poorly, and expose the trade-off between model performance and fairness. To tackle these issues, we propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute. This simple but effective design can be applied to several classification backbones that are originally unaware of fairness. In addition, we derive a novel loss function that constrains statistical parity between subgroups within mini-batches, encouraging the model to converge with improved fairness. To evaluate the trade-off between model performance and fairness, we propose a new metric, Fairness-Accuracy Trade-off Efficiency (FATE), which computes the normalized fairness improvement over the accuracy drop. Experiments on two dermatological datasets show that the proposed method outperforms alternatives on fairness criteria and FATE.
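As a rough illustration of the core idea, the sketch below shows one way batch normalization could be made adaptive to a discrete sensitive attribute in PyTorch: a separate normalizer is kept per subgroup, and each sample is routed through the normalizer of its own subgroup. This is a minimal sketch based only on the abstract's description, not the authors' released implementation; the module name AdaptiveBatchNorm2d, the integer subgroup encoding, and the demo shapes are assumptions made here for illustration.

    # Minimal sketch (assumed names/shapes), not the authors' code.
    import torch
    import torch.nn as nn


    class AdaptiveBatchNorm2d(nn.Module):
        """One BatchNorm2d per sensitive-attribute subgroup; each sample is
        normalized with the statistics and affine parameters of its subgroup."""

        def __init__(self, num_features: int, num_groups: int):
            super().__init__()
            self.bns = nn.ModuleList(
                nn.BatchNorm2d(num_features) for _ in range(num_groups)
            )

        def forward(self, x: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
            # x: (N, C, H, W) feature maps; group: (N,) integer subgroup labels
            out = torch.empty_like(x)
            for g, bn in enumerate(self.bns):
                mask = group == g
                if mask.any():
                    out[mask] = bn(x[mask])
            return out


    if __name__ == "__main__":
        abn = AdaptiveBatchNorm2d(num_features=16, num_groups=2)
        x = torch.randn(8, 16, 32, 32)
        group = torch.randint(0, 2, (8,))
        print(abn(x, group).shape)  # torch.Size([8, 16, 32, 32])

In the paper's framing, a module of this kind would stand in for the standard normalization layers of a fairness-unaware backbone, and training would additionally include the mini-batch statistical-parity loss term described in the abstract.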


