Data Overlap: A Prerequisite For Disentanglement

02/27/2022
by Nathan Michlo, et al.

The ability of variational autoencoders (VAEs) to learn disentangled representations is often attributed to the regularisation component of the loss. In this work, we highlight the interaction between the data and the reconstruction term of the loss as the main contributor to disentanglement in VAEs. We note that standardised benchmark datasets are constructed in a way that is conducive to learning what appear to be disentangled representations. We design an intuitive adversarial dataset that exploits this mechanism to break existing state-of-the-art disentanglement frameworks. Finally, we provide solutions in the form of a modified reconstruction loss, suggesting that VAEs are accidental distance learners.
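For context, the VAE objective the abstract decomposes has two parts: a reconstruction term and a regularisation (KL) term, with the β-VAE variant weighting the latter by a factor β. The PyTorch sketch below is illustrative only; the function name and the choice of MSE reconstruction are assumptions, and it shows the standard β-VAE objective, not the paper's proposed modified reconstruction loss.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Standard beta-VAE objective: reconstruction + beta * KL.

    Illustrative sketch only. The paper argues that the reconstruction
    term's interaction with the data, rather than the KL regulariser,
    is the main driver of disentanglement.
    """
    # Reconstruction term (MSE chosen here for illustration).
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Regularisation term: closed-form KL(N(mu, sigma^2) || N(0, I)).
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl
```

Setting beta=1.0 recovers the plain VAE evidence lower bound; larger values of β increase the pressure from the regulariser, which is the component disentanglement is conventionally attributed to.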


Related research

Understanding disentangling in β-VAE (04/10/2018)

On Causally Disentangled Representations (12/10/2021)

Learning Interpretable Disentangled Representations using Adversarial VAEs (04/17/2019)

How do Variational Autoencoders Learn? Insights from Representational Similarity (05/17/2022)

Learning Disentangled Expression Representations from Facial Images (08/16/2020)

Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs (01/21/2019)
