The Variational Fair Autoencoder

11/03/2015
by Christos Louizos, et al.

We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible. Our model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation. Any subsequent processing, such as classification, can then be performed on this purged latent representation. To remove any remaining dependencies we incorporate an additional penalty term based on the "Maximum Mean Discrepancy" (MMD) measure. We discuss how these architectures can be efficiently trained on data and show in experiments that this method is more effective than previous work in removing unwanted sources of variation while maintaining informative latent representations.
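The abstract describes two ingredients: a variational autoencoder whose encoder and decoder are conditioned on the sensitive variable (so the latent code has no incentive to encode it), and an MMD penalty that pushes the latent distributions of the different sensitive groups to match. The following is a minimal sketch of those two pieces in PyTorch, not the authors' reference implementation; the layer sizes, the `mmd_weight` value, the RBF bandwidth, and the assumption of a one-hot binary sensitive variable `s` are illustrative choices, not taken from the paper.

```python
# Sketch of a "fair" conditional VAE with an MMD penalty between the
# latent codes of the two sensitive groups. Hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def rbf_mmd(z0: torch.Tensor, z1: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD between two samples under an RBF kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)              # pairwise squared distances
        return torch.exp(-d2 / (2.0 * bandwidth ** 2))
    return kernel(z0, z0).mean() + kernel(z1, z1).mean() - 2.0 * kernel(z0, z1).mean()


class FairVAE(nn.Module):
    """Conditional VAE: q(z | x, s) and p(x | z, s), standard normal prior on z."""

    def __init__(self, x_dim: int, s_dim: int, z_dim: int, h_dim: int = 128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + s_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim + s_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def forward(self, x, s):
        h = self.enc(torch.cat([x, s], dim=-1))
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        x_hat = self.dec(torch.cat([z, s], dim=-1))
        return x_hat, mu, logvar, z


def loss(model: FairVAE, x: torch.Tensor, s: torch.Tensor, mmd_weight: float = 100.0):
    """Negative ELBO plus an MMD penalty between the latents of the two groups."""
    x_hat, mu, logvar, z = model(x, s)
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    group = s.argmax(dim=-1)                        # s assumed one-hot, two groups
    z0, z1 = z[group == 0], z[group == 1]
    mmd = rbf_mmd(z0, z1) if len(z0) > 0 and len(z1) > 0 else z.new_zeros(())
    return recon + kl + mmd_weight * mmd
```

Downstream classification would then be run on the latent code z (or the encoder mean), which the conditioning and the MMD term have pushed to be uninformative about s.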

Related research

- 03/14/2020: Semi-supervised Disentanglement with Independent Vector Variational Autoencoders. "We aim to separate the generative factors of data into two latent vector..."
- 11/15/2022: MMD-B-Fair: Learning Fair Representations with Statistical Testing. "We introduce a method, MMD-B-Fair, to learn fair representations of data..."
- 04/01/2022: Semi-FairVAE: Semi-supervised Fair Representation Learning with Adversarial Variational Autoencoder. "Adversarial learning is a widely used technique in fair representation l..."
- 10/19/2012: Disentangling Factors of Variation via Generative Entangling. "Here we propose a novel model family with the objective of learning to d..."
- 06/05/2021: Local Disentanglement in Variational Auto-Encoders Using Jacobian L_1 Regularization. "There have been many recent advances in representation learning; however..."
- 06/15/2021: Contrastive Mixture of Posteriors for Counterfactual Inference, Data Integration and Fairness. "Learning meaningful representations of data that can address challenges..."
- 11/20/2017: Disentangling Factors of Variation by Mixing Them. "We propose an unsupervised approach to learn image representations that..."
