How do Variational Autoencoders Learn? Insights from Representational Similarity

05/17/2022
by Lisa Bonheme, et al.

The ability of Variational Autoencoders (VAEs) to learn disentangled representations has made them popular for practical applications. However, their behaviour is not yet fully understood. For example, the questions of when they can provide disentangled representations, or when they suffer from posterior collapse, are still areas of active research. Despite this, there are no layer-wise comparisons of the representations learned by VAEs, which would further our understanding of these models. In this paper, we thus look into the internal behaviour of VAEs using representational similarity techniques. Specifically, using the CKA and Procrustes similarities, we find that the encoders' representations are learned long before the decoders', and that this behaviour is independent of hyperparameters, learning objectives, and datasets. Moreover, the encoders' representations up to the mean and variance layers are similar across hyperparameters and learning objectives.
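
As background for the two measures named in the abstract, here is a minimal NumPy sketch of linear CKA (the HSIC-based formulation of Kornblith et al., 2019) and an orthogonal-Procrustes similarity between two layers' activation matrices. The function names, normalisation choices, and the demo at the end are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def linear_cka(x, y):
    """Linear Centered Kernel Alignment between two activation
    matrices of shape (n_examples, n_features)."""
    # Centre each feature column.
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    # HSIC-based formulation for linear kernels:
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    hsic_xy = np.linalg.norm(y.T @ x, ord='fro') ** 2
    norm_x = np.linalg.norm(x.T @ x, ord='fro')
    norm_y = np.linalg.norm(y.T @ y, ord='fro')
    return hsic_xy / (norm_x * norm_y)

def procrustes_similarity(x, y):
    """One common orthogonal-Procrustes similarity: after centring and
    Frobenius-normalising both matrices, it equals the nuclear norm of
    X^T Y, which is 1 when X and Y differ only by an orthogonal map."""
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    x = x / np.linalg.norm(x)  # Frobenius norm for 2-D arrays
    y = y / np.linalg.norm(y)
    return np.linalg.norm(x.T @ y, ord='nuc')

# Demo: an orthogonal transform of the same activations should score ~1.0
# under both measures (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
a = rng.normal(size=(500, 64))          # e.g. one layer's activations
q, _ = np.linalg.qr(rng.normal(size=(64, 64)))  # random orthogonal map
b = a @ q
print(linear_cka(a, b), procrustes_similarity(a, b))
```

Both measures are invariant to orthogonal transformations and isotropic scaling of the representations, which is what makes them suitable for comparing layers across differently initialised or differently configured VAEs.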


Related research

09/26/2021
Be More Active! Understanding the Differences between Mean and Sampled Representations of Variational Autoencoders
The ability of Variational Autoencoders to learn disentangled representa...

11/24/2017
Quantifying the Effects of Enforcing Disentanglement on Variational Autoencoders
The notion of disentangled autoencoders was proposed as an extension to ...

12/03/2019
Singing Voice Conversion with Disentangled Representations of Singer and Vocal Technique Using Variational Autoencoders
We propose a flexible framework that deals with both singer conversion a...

04/27/2018
Disentangling Factors of Variation with Cycle-Consistent Variational Auto-Encoders
Generative models that learn disentangled representations for different ...

02/27/2022
Data Overlap: A Prerequisite For Disentanglement
Learning disentangled representations with variational autoencoders (VAE...

03/02/2023
DAVA: Disentangling Adversarial Variational Autoencoder
The use of well-disentangled representations offers many advantages for ...

01/21/2019
Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs
We present a simple neural rendering architecture that helps variational...
