Projection-wise Disentangling for Fair and Interpretable Representation Learning: Application to 3D Facial Shape Analysis

06/25/2021
by   Xianjing Liu, et al.

Confounding bias is a crucial problem when applying machine learning in practice, especially in clinical settings. We consider the problem of learning representations that are independent of multiple biases. In the literature, this is mostly addressed by purging the bias information from the learned representations. We expect, however, that this strategy harms the diversity of information in the representation and thus limits its prospective uses (e.g., interpretation). We therefore propose to mitigate the bias while keeping almost all information in the latent representation, which also allows us to observe and interpret it. To achieve this, we project the latent features onto a learned vector direction and enforce independence between the biases and the projected features, rather than between the biases and all learned features. To interpret the mapping between the projected features and the input data, we propose projection-wise disentangling: sampling and reconstruction along the learned vector direction. The proposed method was evaluated on the analysis of 3D facial shape and patient characteristics (N=5011). Experiments showed that this conceptually simple method achieves state-of-the-art fair prediction performance and interpretability, demonstrating its potential for clinical applications.
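A minimal sketch of the core idea as described in the abstract: latent features are projected onto a learned direction, and independence is enforced between the biases and that one-dimensional projection rather than the full latent vector. The encoder, the choice of independence measure (a simple squared Pearson correlation penalty is used here), and all names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: projection onto a learned direction plus a correlation-based
# independence penalty between the projection and bias variables.
import torch
import torch.nn as nn


class ProjectionHead(nn.Module):
    def __init__(self, latent_dim: int):
        super().__init__()
        # learnable direction in latent space (assumed parameterisation)
        self.direction = nn.Parameter(torch.randn(latent_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # project each latent vector onto the unit-normalised direction
        w = self.direction / self.direction.norm()
        return z @ w  # shape: (batch,)


def correlation_penalty(p: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    # squared Pearson correlation between the projected feature and each bias
    # variable; stands in for whatever independence measure the paper uses
    p = (p - p.mean()) / (p.std() + 1e-8)
    b = (bias - bias.mean(dim=0)) / (bias.std(dim=0) + 1e-8)
    corr = (p.unsqueeze(1) * b).mean(dim=0)  # one value per bias variable
    return (corr ** 2).sum()


# Hypothetical usage with an existing encoder and task loss:
#   z = encoder(x)                       # (batch, latent_dim)
#   p = head(z)                          # (batch,)
#   loss = task_loss(p, y) + lam * correlation_penalty(p, biases)
```

Interpretation would then follow by decoding z + t * w for a range of scalars t, i.e., the sampling and reconstruction along the learned direction that the abstract calls projection-wise disentangling.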

Related research

06/05/2023 - Fair Patient Model: Mitigating Bias in the Patient Representation Learned from the Electronic Health Records
Objective: To pre-train fair and unbiased patient representations from E...

11/14/2017 - Unsupervised patient representations from clinical notes with interpretable classification decisions
We have two main contributions in this work: 1. We explore the usage of ...

04/18/2021 - Fair Representation Learning for Heterogeneous Information Networks
Recently, much attention has been paid to the societal impact of AI, esp...

04/17/2019 - Learning Interpretable Disentangled Representations using Adversarial VAEs
Learning Interpretable representation in medical applications is becomin...

07/03/2018 - Patient representation learning and interpretable evaluation using clinical notes
We have three contributions in this work: 1. We explore the utility of a...

03/30/2021 - Unsupervised Disentanglement of Linear-Encoded Facial Semantics
We propose a method to disentangle linear-encoded facial semantics from ...