The Effects of Invertibility on the Representational Complexity of Encoders in Variational Autoencoders

07/09/2021
by Divyansh Pareek, et al.

Training and using modern neural-network-based latent-variable generative models (such as Variational Autoencoders) often requires simultaneously training a generative direction along with an inferential (encoding) direction that approximates the posterior distribution over the latent variables. This raises the question: how complex does the inferential model need to be in order to accurately model the posterior distribution of a given generative model? In this paper, we identify an important property of the generative map that impacts the required size of the encoder. We show that if the generative map is "strongly invertible" (in a sense we suitably formalize), the inferential model need not be much more complex. Conversely, we prove that there exist non-invertible generative maps for which the encoding direction needs to be exponentially larger (under standard assumptions in computational complexity). Importantly, we do not require the generative model to be layerwise invertible, an assumption much of the related literature makes that is not satisfied by many architectures used in practice (e.g., convolution- and pooling-based networks). We thus provide theoretical support for the empirical wisdom that learning deep generative models is harder when the data lies on a low-dimensional manifold.
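To make the two directions concrete, below is a minimal sketch in PyTorch of a generative (decoder) map and an inferential (encoder) map, together with one plausible numerical proxy for local invertibility of the decoder: the smallest singular value of its Jacobian. All names and sizes (`decoder`, `encoder`, `local_invertibility_proxy`, `latent_dim`, `data_dim`) are illustrative assumptions, not taken from the paper; the paper's "strong invertibility" is a formal condition, and this check is only a local, heuristic stand-in.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 2, 5  # toy dimensions (illustrative)

# Generative direction: decoder g mapping latents z to observations x.
decoder = nn.Sequential(
    nn.Linear(latent_dim, 16),
    nn.Tanh(),
    nn.Linear(16, data_dim),
)

# Inferential direction: encoder approximating the posterior over z
# with a diagonal Gaussian, as in a standard VAE; it outputs the mean
# and log-variance stacked together.
encoder = nn.Sequential(
    nn.Linear(data_dim, 16),
    nn.Tanh(),
    nn.Linear(16, 2 * latent_dim),
)

def local_invertibility_proxy(g, z):
    # Smallest singular value of the Jacobian of g at z. A value bounded
    # away from zero means g is locally injective and well-conditioned
    # near z -- a plausible numerical proxy for "strong invertibility",
    # not the paper's formal definition.
    J = torch.autograd.functional.jacobian(g, z)  # shape: (data_dim, latent_dim)
    return torch.linalg.svdvals(J).min()

z = torch.randn(latent_dim)
print(float(local_invertibility_proxy(decoder, z)))
```

If the decoder were non-injective (e.g., it collapsed distinct latents to the same output, as pooling can), this smallest singular value would approach zero along the collapsed directions, which is the regime where the paper argues the encoder may need to be exponentially larger.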


Related research

12/04/2019
Learning Deep Generative Models with Short Run Inference Dynamics
This paper studies the fundamental problem of learning deep generative m...

02/25/2020
Batch norm with entropic regularization turns deterministic autoencoders into generative models
The variational autoencoder is a well defined deep generative model that...

06/05/2018
Generative Reversible Networks
Generative models with an encoding component such as autoencoders curren...

03/17/2020
Characterizing and Avoiding Problematic Global Optima of Variational Autoencoders
Variational Auto-encoders (VAEs) are deep generative latent variable mod...

10/07/2020
Learning Deep-Latent Hierarchies by Stacking Wasserstein Autoencoders
Probabilistic models with hierarchical-latent-variable structures provid...

12/19/2018
Fast Approximate Geodesics for Deep Generative Models
The length of the geodesic between two data points along the Riemannian ...

06/20/2022
Identifiability of deep generative models under mixture priors without auxiliary information
We prove identifiability of a broad class of deep latent variable models...
