Be More Active! Understanding the Differences between Mean and Sampled Representations of Variational Autoencoders

09/26/2021
by Lisa Bonheme, et al.

The ability of Variational Autoencoders (VAEs) to learn disentangled representations has made them appealing for practical applications. However, their mean representations, which are generally used for downstream tasks, have recently been shown to be more correlated than their sampled counterparts, on which disentanglement is usually measured. In this paper, we refine this observation through the lens of selective posterior collapse, which states that only a subset of the learned variables, the active variables, encodes useful information, while the rest (the passive variables) is discarded. We first extend this definition, originally proposed for sampled representations, to mean representations and show that active variables are equally disentangled in both. Based on the extended definition and the pre-trained models of disentanglement_lib, we then isolate the passive variables and show that they are responsible for the discrepancies between mean and sampled representations: passive variables exhibit high correlation scores with other variables in mean representations while being fully uncorrelated in sampled ones. We thus conclude that, despite what their higher correlation might suggest, mean representations remain good candidates for downstream tasks. However, it may be beneficial to remove their passive variables, especially when the representations are used with models that are sensitive to correlated features.
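To make the active/passive distinction concrete, below is a minimal sketch of how one might separate the two kinds of variables and reproduce the correlation gap the abstract describes. It assumes a VAE encoder that outputs per-sample means and log-variances; the function names and the KL threshold are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def split_active_passive(mu, log_var, kl_threshold=0.01):
    """Split latent dimensions into active and passive ones.

    A dimension whose average KL divergence to the N(0, 1) prior is
    close to zero has collapsed to the prior (passive); the remaining
    dimensions are active. The threshold is an illustrative cut-off.

    mu, log_var: encoder outputs of shape (n_samples, n_latents).
    """
    var = np.exp(log_var)
    # Per-dimension KL( q(z|x) || N(0, 1) ), averaged over the dataset.
    kl_per_dim = 0.5 * (mu**2 + var - log_var - 1.0).mean(axis=0)
    active = np.flatnonzero(kl_per_dim > kl_threshold)
    passive = np.flatnonzero(kl_per_dim <= kl_threshold)
    return active, passive

def correlation_gap(mu, log_var, seed=0):
    """Correlation matrices of mean vs. sampled representations."""
    rng = np.random.default_rng(seed)
    std = np.exp(0.5 * log_var)
    z = mu + std * rng.standard_normal(mu.shape)  # reparameterisation
    return np.corrcoef(mu, rowvar=False), np.corrcoef(z, rowvar=False)
```

For passive dimensions, the sampled codes are dominated by unit-variance prior noise, so their entries in the sampled correlation matrix stay near zero, whereas their near-constant means can pick up spurious correlations with other variables. Keeping only the active columns of the mean representation (e.g. mu[:, active]) is one way to apply the removal the abstract recommends before feeding models that are sensitive to correlated features.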


