Interventional Assays for the Latent Space of Autoencoders

06/30/2021
by Felix Leeb, et al.

The encoder and decoder of an autoencoder effectively project the input onto learned manifolds in the latent space and the data space, respectively. We propose a framework, called latent responses, for probing the learned data manifold using interventions in the latent space. Using this framework, we investigate "holes" in the representation to quantitatively ascertain to what extent the latent space of a trained VAE is consistent with the chosen prior. Furthermore, we use the identified structure to improve interpolation between latent vectors. We evaluate how our analyses improve the quality of samples generated by VAEs on a variety of benchmark datasets.
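The core intervention can be sketched as follows: perturb a latent vector, decode it, re-encode the generation, and measure how far the re-encoded latent lands from the intervened one. This is a minimal toy illustration, not the paper's implementation: the linear encoder/decoder, the clipping (standing in for a decoder that only realizes latents inside the learned region), and all names are assumptions for exposition.

```python
import numpy as np

# Toy linear autoencoder standing in for a trained VAE. The decoder's clip
# models a learned manifold that covers only the box [-1, 1]^k of latent
# space, so latents outside it are "holes" under a broader prior.
rng = np.random.default_rng(0)
d_data, d_latent = 8, 3
W = rng.normal(size=(d_latent, d_data)) / np.sqrt(d_data)  # encoder map
V = np.linalg.pinv(W)                                      # decoder map

def encode(x):
    return W @ x

def decode(z):
    # Generations from latents outside the covered region fall back
    # onto the learned manifold.
    return V @ np.clip(z, -1.0, 1.0)

def latent_response(z):
    """Distance between an intervened latent and its re-encoding.

    Small values mean decode(z) encodes back to z (on-manifold);
    large values flag a region where the prior places mass but the
    learned manifold does not.
    """
    return float(np.linalg.norm(encode(decode(z)) - z))

z_on = np.array([0.5, 0.0, 0.0])   # inside the covered region
z_off = np.array([2.0, 0.0, 0.0])  # outside it: a "hole"
print(round(latent_response(z_on), 6))   # → 0.0
print(round(latent_response(z_off), 6))  # → 1.0
```

In this toy, the response is exactly the distance from the intervened latent to the covered box; in a trained VAE the same probe is computed empirically, by comparing latents before and after a decode–encode round trip.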


