PRI-VAE: Principle-of-Relevant-Information Variational Autoencoders

07/13/2020
by Yanjun Li, et al.

Although substantial efforts have been made to learn disentangled representations under the variational autoencoder (VAE) framework, the fundamental properties of the learning dynamics of most VAE models remain unknown and under-investigated. In this work, we first propose a novel learning objective, termed the principle-of-relevant-information variational autoencoder (PRI-VAE), to learn disentangled representations. We then present an information-theoretic perspective on existing VAE models by inspecting the evolution of several critical information-theoretic quantities across training epochs. Our observations unveil fundamental properties associated with VAEs. Empirical results also demonstrate the effectiveness of PRI-VAE on four benchmark data sets.
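Since this abstract does not spell out the PRI-VAE objective itself, the sketch below (in PyTorch) only illustrates the kind of objective it describes: a standard VAE loss whose rate term, KL(q(z|x) || p(z)), is one information-theoretic quantity that can be tracked across training epochs, plus a placeholder slot where a PRI-style regularizer would enter. The architecture and all names (VAE, loss_fn, pri_weight, pri_penalty) are illustrative assumptions, not the authors' implementation; the actual PRI term is defined in the full paper.

```python
# Hypothetical sketch of a VAE objective with a slot for a PRI-style
# regularizer. Not the authors' code; the PRI term is a placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=10, h_dim=400):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def loss_fn(x, x_logits, mu, logvar, pri_weight=1.0):
    # Distortion: reconstruction NLL under a Bernoulli decoder
    # (assumes x is scaled to [0, 1]).
    recon = F.binary_cross_entropy_with_logits(
        x_logits, x, reduction="sum") / x.size(0)
    # Rate: KL(q(z|x) || N(0, I)), in closed form for a Gaussian encoder.
    # Logging this per epoch is one way to inspect the evolution of an
    # information-theoretic quantity during training.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    # Placeholder for a PRI-style regularizer on the latent code
    # (hypothetical; see the full text for the paper's actual term).
    pri_penalty = torch.zeros((), device=x.device)
    return recon + kl + pri_weight * pri_penalty, kl.item()

# Minimal usage example:
model = VAE()
x = torch.rand(64, 784)  # e.g. flattened images in [0, 1]
x_logits, mu, logvar = model(x)
loss, rate = loss_fn(x, x_logits, mu, logvar)
loss.backward()
```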


08/02/2018

Variational Information Bottleneck on Vector Quantized Autoencoders

In this paper, we provide an information-theoretic interpretation of the...
04/10/2018

Understanding disentangling in β-VAE

We present new intuitions and theoretical assessments of the emergence o...
05/24/2017

Multi-Level Variational Autoencoder: Learning Disentangled Representations from Grouped Observations

We would like to learn a representation of the data which decomposes an ...
05/21/2018

Invariant Representations from Adversarially Censored Autoencoders

We combine conditional variational autoencoders (VAE) with adversarial c...
09/12/2019

Generating Data using Monte Carlo Dropout

For many analytical problems the challenge is to handle huge amounts of ...
01/22/2020

A Deep Learning Algorithm for High-Dimensional Exploratory Item Factor Analysis

Deep learning methods are the gold standard for non-linear statistical m...
05/27/2019

Wyner VAE: Joint and Conditional Generation with Succinct Common Representation Learning

A new variational autoencoder (VAE) model is proposed that learns a succ...