Learning Hierarchical Priors in VAEs

05/13/2019
by Alexej Klushyn, et al.

We propose to learn a hierarchical prior in the context of variational autoencoders. Our aim is to avoid the over-regularisation that results from a simplistic prior such as a standard normal distribution. To incentivise an informative latent representation of the data by learning a rich hierarchical prior, we formulate the objective function as the Lagrangian of a constrained-optimisation problem and propose an optimisation algorithm inspired by Taming VAEs. To validate our approach, we train our model on the static and binary MNIST, Fashion-MNIST, OMNIGLOT, CMU Graphics Lab Motion Capture, 3D Faces, and 3D Chairs datasets, obtaining results comparable to the state of the art. Furthermore, we introduce a graph-based interpolation method to show that the topology of the learned latent representation corresponds to the topology of the data manifold.
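The constrained-optimisation idea referenced above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: it assumes a GECO-style setup (as in Taming VAEs) in which a regularisation term is minimised subject to a reconstruction constraint E[C(x, x̂)] ≤ κ, with the Lagrange multiplier λ adapted multiplicatively from a moving average of the constraint violation. All function and parameter names here are illustrative.

```python
import math

def lagrangian(kl_term, recon_error, lam, kappa):
    """Lagrangian of the constrained problem (illustrative form):
    regularisation term plus lambda times the constraint violation."""
    return kl_term + lam * (recon_error - kappa)

def update_multiplier(lam, recon_ma, kappa, eta=0.01):
    """GECO-style multiplicative update (illustrative): lambda grows
    while the moving-average reconstruction error exceeds the
    tolerance kappa, and shrinks once the constraint is satisfied."""
    return lam * math.exp(eta * (recon_ma - kappa))
```

In such a scheme, optimising the Lagrangian while adapting λ pushes the model to first satisfy the reconstruction constraint and only then tighten the latent regularisation, which is one way to counteract over-regularisation from a restrictive prior.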
