Training Generative Reversible Networks

Generative models with an encoding component, such as autoencoders, currently receive great interest. However, training autoencoders is typically complicated by the need to train a separate encoder and decoder and to enforce that they are reciprocal to each other. To overcome this problem, by-design reversible neural networks (RevNets) have previously been used as generative models, either by directly optimizing the likelihood of the data under the model or by applying an adversarial approach to the generated data. Here, we instead investigate their performance with an adversary on the latent space, in the adversarial autoencoder framework. We evaluate the generative performance of RevNets on the CelebA dataset, showing that generative RevNets can generate coherent faces of similar quality to those of variational autoencoders. This first attempt to use RevNets inside the adversarial autoencoder framework slightly underperforms recent advanced autoencoder-based generative models on CelebA, but this gap may diminish with further optimization of the training setup of generative RevNets. In addition to the experiments on CelebA, we show a proof-of-principle experiment on the MNIST dataset suggesting that adversary-free trained RevNets can discover meaningful latent dimensions without pre-specifying the number of dimensions of the latent sampling distribution. In summary, this study shows that RevNets can be employed in different generative training settings.
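The key property the abstract relies on is that a RevNet is invertible by construction, so encoding and decoding are the same network run in opposite directions and no separate decoder must be trained. A minimal sketch of this idea, using an additive coupling block with toy tanh sub-networks (the weights and sub-network choice are illustrative assumptions, not the architecture used in the paper):

```python
import numpy as np

# Additive coupling block, the building unit of reversible networks:
# the input is split into two halves (x1, x2); each update adds a
# function of the *other* half, so it can be undone exactly.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 4))  # toy weights for sub-network f (assumption)
W2 = rng.normal(size=(4, 4))  # toy weights for sub-network g (assumption)

def f(h):
    # first coupling sub-network (illustrative choice)
    return np.tanh(h @ W1)

def g(h):
    # second coupling sub-network (illustrative choice)
    return np.tanh(h @ W2)

def forward(x1, x2):
    # encoder direction: data -> latent
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def inverse(y1, y2):
    # decoder direction: latent -> data, derived by rearranging forward()
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

x1, x2 = rng.normal(size=(2, 4))
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
print(np.allclose(x1, r1) and np.allclose(x2, r2))  # reconstruction is exact
```

Because the inverse is available in closed form, the only thing left to train in the adversarial autoencoder setting is the forward map plus a discriminator on the latent code.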
