Trading Information between Latents in Hierarchical Variational Autoencoders

02/09/2023
by Tim Z. Xiao et al.

Variational Autoencoders (VAEs) were originally motivated (Kingma & Welling, 2014) as probabilistic generative models in which one performs approximate Bayesian inference. The proposal of β-VAEs (Higgins et al., 2017) breaks this interpretation and generalizes VAEs to application domains beyond generative modeling (e.g., representation learning, clustering, or lossy data compression) by introducing an objective function that allows practitioners to trade off between the information content ("bit rate") of the latent representation and the distortion of reconstructed data (Alemi et al., 2018). In this paper, we reconsider this rate/distortion trade-off in the context of hierarchical VAEs, i.e., VAEs with more than one layer of latent variables. We identify a general class of inference models for which one can split the rate into contributions from each layer, which can then be tuned independently. We derive theoretical bounds on the performance of downstream tasks as functions of the individual layers' rates and verify our theoretical findings in large-scale experiments. Our results provide guidance for practitioners on which region in rate-space to target for a given application.
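To make the trade-off concrete, here is a minimal numerical sketch of a β-weighted objective with one rate (KL) term per latent layer, as in the per-layer rate splitting the abstract describes. The function names, the diagonal-Gaussian posterior assumption, and the toy parameters are illustrative choices for this sketch, not code from the paper.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ), in nats."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def layered_beta_objective(distortion, layer_mus, layer_log_vars, betas):
    """Distortion plus one beta-weighted rate term per latent layer.

    distortion      : scalar reconstruction loss (e.g. MSE or neg. log-likelihood)
    layer_mus       : list of per-layer posterior means, each shape (dim,)
    layer_log_vars  : list of per-layer posterior log-variances, same shapes
    betas           : one trade-off weight per layer; tuning these independently
                      targets different regions in per-layer rate-space

    Returns the total loss and the list of per-layer rates.
    """
    rates = [gaussian_kl(mu, lv) for mu, lv in zip(layer_mus, layer_log_vars)]
    total = distortion + sum(b * r for b, r in zip(betas, rates))
    return total, rates

# Toy example: a two-layer hierarchy with hypothetical posterior parameters.
rng = np.random.default_rng(0)
mus = [rng.normal(size=4), rng.normal(size=8)]
log_vars = [np.full(4, -1.0), np.full(8, -0.5)]
loss, rates = layered_beta_objective(0.37, mus, log_vars, betas=[1.0, 4.0])
```

Setting all betas to 1 recovers the standard (negative) ELBO; raising the beta of one layer penalizes that layer's bit rate specifically, which is the knob the paper's per-layer analysis is about.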

Related research

- Hierarchical Autoregressive Modeling for Neural Video Compression (10/19/2020): Recent work by Marino et al. (2020) showed improved performance in seque...
- An Information-Theoretic Analysis of Deep Latent-Variable Models (11/01/2017): We present an information-theoretic framework for understanding trade-of...
- DVAE++: Discrete Variational Autoencoders with Overlapping Transformations (02/14/2018): Training of discrete latent variable models remains challenging because ...
- Text Modeling with Syntax-Aware Variational Autoencoders (08/27/2019): Syntactic information contains structures and rules about how text sente...
- Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (GIN) (01/14/2020): A central question of representation learning asks under which condition...
- Covariate-informed Representation Learning with Samplewise Optimal Identifiable Variational Autoencoders (02/09/2022): Recently proposed identifiable variational autoencoder (iVAE, Khemakhem ...
- Exact Rate-Distortion in Autoencoders via Echo Noise (04/15/2019): Compression is at the heart of effective representation learning. Howeve...
