On Fairness of Medical Image Classification with Multiple Sensitive Attributes via Learning Orthogonal Representations

01/04/2023
by Wenlong Deng, et al.

Mitigating the discrimination of machine learning models has gained increasing attention in medical image analysis. However, few works focus on fair treatment of patients with multiple sensitive demographic attributes, which is a crucial yet challenging problem for real-world clinical applications. In this paper, we propose a novel method for fair representation learning with respect to multiple sensitive attributes. We pursue independence between the target and multi-sensitive representations by enforcing orthogonality in the representation space. Concretely, we enforce column-space orthogonality by keeping target information on the complement of a low-rank sensitive space. Furthermore, in the row space, we encourage the feature dimensions of the target and sensitive representations to be orthogonal. The effectiveness of the proposed method is demonstrated with extensive experiments on the CheXpert dataset. To the best of our knowledge, this is the first work to mitigate unfairness with respect to multiple sensitive attributes in medical imaging.
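The abstract only summarizes the approach at a high level. As a rough illustration (not the authors' released implementation), the following minimal PyTorch-style sketch shows one way the two orthogonality constraints could be expressed as penalties: a column-space term that discourages target features from lying in a low-rank subspace spanned by the sensitive representations, and a row-space term that decorrelates feature dimensions across the batch. The function names, the rank hyperparameter, and the batch shapes are assumptions made for this sketch.

```python
# Hypothetical sketch of orthogonality penalties between target and
# sensitive representations (assumes PyTorch; not the authors' code).
import torch


def column_space_penalty(z_target: torch.Tensor,
                         z_sensitive: torch.Tensor,
                         rank: int = 4) -> torch.Tensor:
    """Penalize the component of the target representation that lies in a
    low-rank subspace spanned by the sensitive representations.

    z_target, z_sensitive: (batch, dim) batches of representations.
    rank: assumed dimensionality of the sensitive subspace.
    """
    # Low-rank basis of the sensitive space from the top right-singular vectors.
    _, _, vh = torch.linalg.svd(z_sensitive, full_matrices=False)
    basis = vh[:rank]                          # (rank, dim)
    # Projection of target features onto the sensitive subspace; keep it small.
    proj = z_target @ basis.T                  # (batch, rank)
    return proj.pow(2).mean()


def row_space_penalty(z_target: torch.Tensor,
                      z_sensitive: torch.Tensor) -> torch.Tensor:
    """Encourage feature dimensions of the two representations to be
    orthogonal by penalizing their cross-correlation over the batch."""
    zt = (z_target - z_target.mean(0)) / (z_target.std(0) + 1e-6)
    zs = (z_sensitive - z_sensitive.mean(0)) / (z_sensitive.std(0) + 1e-6)
    cross = (zt.T @ zs) / zt.shape[0]          # (dim_t, dim_s) correlations
    return cross.pow(2).mean()


# Example usage with random stand-in representations.
z_t = torch.randn(32, 128)   # target representation for a batch
z_s = torch.randn(32, 128)   # multi-sensitive-attribute representation
loss_orth = column_space_penalty(z_t, z_s) + row_space_penalty(z_t, z_s)
```

In practice such penalties would be added to the classification loss with tunable weights; the paper's exact formulation may differ from this sketch.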


Related research

06/06/2019
Flexibly Fair Representation Learning by Disentanglement
We consider the problem of learning representations that achieve group a...

05/02/2023
Are demographically invariant models and representations in medical imaging fair?
Medical imaging models have been shown to encode information about patie...

07/04/2023
Mitigating Calibration Bias Without Fixed Attribute Grouping for Improved Fairness in Medical Imaging Analysis
Trustworthy deployment of deep learning medical imaging models into real...

03/12/2020
Fairness by Learning Orthogonal Disentangled Representations
Learning discriminative powerful representations is a crucial step for m...

01/11/2021
Learning to Ignore: Fair and Task Independent Representations
Training fair machine learning models, aiming for their interpretability...

12/17/2018
BriarPatches: Pixel-Space Interventions for Inducing Demographic Parity
We introduce the BriarPatch, a pixel-space intervention that obscures se...

09/02/2022
Exploiting Fairness to Enhance Sensitive Attributes Reconstruction
In recent years, a growing body of work has emerged on how to learn mach...
