Failure Modes of Variational Autoencoders and Their Effects on Downstream Tasks

07/14/2020
by Yaniv Yacoby, et al.

Variational Autoencoders (VAEs) are deep generative latent variable models that are widely used for a range of downstream tasks. While it has been demonstrated that VAE training can suffer from a number of pathologies, the existing literature lacks a characterization of exactly when these pathologies occur and how they impact downstream task performance. In this paper we concretely characterize the conditions under which VAE training exhibits pathologies, and we connect these failure modes to undesirable effects on specific downstream tasks: learning compressed and disentangled representations, adversarial robustness, and semi-supervised learning.
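For context on the training objective whose pathologies the paper studies, below is a minimal sketch of a standard VAE and its ELBO loss in PyTorch. The layer sizes, latent dimension, and random batch are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal VAE sketch (assumed standard setup, not the paper's exact models).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=2, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def neg_elbo(x_logits, x, mu, logvar):
    # Negative ELBO = reconstruction term + KL(q(z|x) || p(z)), p(z) = N(0, I)
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon + kl) / x.size(0)

# Usage sketch on a hypothetical batch standing in for real data:
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)
logits, mu, logvar = model(x)
loss = neg_elbo(logits, x, mu, logvar)
loss.backward()
opt.step()
```

The pathologies the paper characterizes arise during optimization of exactly this kind of objective, where the reconstruction and KL terms can trade off in ways that hurt the learned representation.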


Related research

05/28/2022
Improving VAE-based Representation Learning
Latent variable models like the Variational Auto-Encoder (VAE) are commo...

10/26/2020
Robust Disentanglement of a Few Factors at a Time
Disentanglement is at the forefront of unsupervised learning, as disenta...

07/28/2021
Unsupervised Learning of Neurosymbolic Encoders
We present a framework for the unsupervised learning of neurosymbolic en...

07/19/2023
Symmetric Equilibrium Learning of VAEs
We view variational autoencoders (VAE) as decoder-encoder pairs, which m...

09/15/2022
Gromov-Wasserstein Autoencoders
Learning concise data representations without supervisory signals is a f...

07/17/2023
Fast model inference and training on-board of Satellites
Artificial intelligence onboard satellites has the potential to reduce d...

03/02/2023
DAVA: Disentangling Adversarial Variational Autoencoder
The use of well-disentangled representations offers many advantages for ...
