PRI-VAE: Principle-of-Relevant-Information Variational Autoencoders

by Yanjun Li, et al.

Although substantial effort has gone into learning disentangled representations under the variational autoencoder (VAE) framework, the fundamental properties and learning dynamics of most VAE models remain unknown and under-investigated. In this work, we first propose a novel learning objective, termed the principle-of-relevant-information variational autoencoder (PRI-VAE), for learning disentangled representations. We then present an information-theoretic perspective for analyzing existing VAE models by inspecting how several critical information-theoretic quantities evolve across training epochs. Our observations unveil some fundamental properties of VAEs. Empirical results also demonstrate the effectiveness of PRI-VAE on four benchmark data sets.
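The abstract does not give the PRI-VAE objective itself, but the VAE models it analyzes share a common loss structure: a reconstruction term plus an information-theoretic regularizer on the latent code (in β-VAE, a weighted KL divergence to an isotropic Gaussian prior). As a hedged sketch only, the snippet below computes that generic β-VAE-family loss in NumPy; the function names and the choice of squared-error reconstruction are illustrative assumptions, not the paper's actual objective.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ),
    # summed over the latent dimensions (last axis).
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Generic beta-VAE-family objective (illustrative, not PRI-VAE):
    # squared-error reconstruction + beta-weighted KL regularizer,
    # averaged over the batch. PRI-VAE replaces/augments the regularizer
    # with a principle-of-relevant-information term.
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    return np.mean(recon + beta * gaussian_kl(mu, logvar))
```

With `beta = 1` this reduces to the standard (negative) ELBO under a Gaussian decoder; larger `beta` tightens the information constraint on the latent code, which is the knob β-VAE uses to encourage disentanglement.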


