Right for the Right Latent Factors: Debiasing Generative Models via Disentanglement

02/01/2022
by   Xiaoting Shao, et al.

A key assumption of most statistical machine learning methods is that they have access to independent samples from the distribution of data they encounter at test time. As such, these methods often perform poorly in the face of biased data, which breaks this assumption. In particular, machine learning models have been shown to exhibit Clever-Hans-like behaviour, meaning that spurious correlations in the training set are inadvertently learnt. A number of methods have been proposed for revising deep classifiers so that they learn the right correlations. However, generative models have been overlooked so far. We observe that generative models are also prone to Clever-Hans-like behaviour. To counteract this issue, we propose to debias generative models by disentangling their internal representations, which is achieved via human feedback. Our experiments show that this is effective at removing bias even when human feedback covers only a small fraction of the desired distribution. In addition, we achieve strong disentanglement results in a quantitative comparison with recent methods.
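The mechanism the abstract describes — steering a generative model's latent representation with sparse human feedback — can be sketched as a semi-supervised disentanglement objective: a standard beta-VAE-style loss plus an alignment term that ties latent dimensions to human-labelled factors, applied only on the (small) labelled subset. This is a minimal illustrative sketch, not the paper's actual method; the function name `debias_loss`, the squared-error alignment term, and the weights `beta` and `gamma` are assumptions for illustration.

```python
import numpy as np

def debias_loss(x, x_hat, mu, log_var, z, factors, labeled_mask,
                beta=4.0, gamma=10.0):
    """Semi-supervised disentanglement objective (illustrative sketch).

    Combines:
      - mean-squared reconstruction error,
      - beta-weighted KL divergence to a standard-normal prior
        (the usual beta-VAE regulariser),
      - gamma-weighted alignment of latent codes z with human-provided
        factor labels, computed ONLY on rows where labeled_mask is True,
        mirroring feedback that covers a small fraction of the data.
    """
    recon = np.mean((x - x_hat) ** 2)
    # KL(N(mu, exp(log_var)) || N(0, 1)), averaged over batch and dims;
    # always non-negative per element.
    kl = -0.5 * np.mean(1.0 + log_var - mu ** 2 - np.exp(log_var))
    if labeled_mask.any():
        align = np.mean((z[labeled_mask] - factors[labeled_mask]) ** 2)
    else:
        align = 0.0  # no feedback available: fall back to plain beta-VAE
    return recon + beta * kl + gamma * align

# Toy usage: 8 samples, 5 observed dims, 3 latent factors,
# with human labels on only half the batch.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 5))
x_hat = rng.normal(size=(8, 5))
mu = rng.normal(size=(8, 3))
log_var = rng.normal(size=(8, 3))
z = rng.normal(size=(8, 3))
factors = rng.normal(size=(8, 3))
labeled = np.array([True] * 4 + [False] * 4)
loss = debias_loss(x, x_hat, mu, log_var, z, factors, labeled)
```

In a training loop this scalar would be minimised with respect to the encoder/decoder parameters; the key design point from the abstract is that the alignment term needs labels for only a small fraction of samples to remove the spurious correlation.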


