InfoVAEGAN: Learning Joint Interpretable Representations by Information Maximization and Maximum Likelihood

07/09/2021
by Fei Ye, et al.

Learning disentangled and interpretable representations is an important step towards accomplishing comprehensive data representations on the manifold. In this paper, we propose a novel representation learning algorithm which combines the inference abilities of Variational Autoencoders (VAE) with the generalization capability of Generative Adversarial Networks (GAN). The proposed model, called InfoVAEGAN, consists of three networks: an Encoder, a Generator and a Discriminator. InfoVAEGAN aims to jointly learn discrete and continuous interpretable representations in an unsupervised manner by applying two different data-free log-likelihood functions to the variables sampled from the generator's distribution. We propose a two-stage algorithm that optimizes the inference network separately from the generator training. Moreover, we enforce the learning of interpretable representations by maximizing the mutual information between the existing latent variables and those created through the generative and inference processes.
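The mutual-information term described above is closely related to the InfoGAN-style variational lower bound I(c; G(z, c)) ≥ E[log Q(c|x)] + H(c), where Q is an auxiliary posterior over the latent code c. As a rough illustration only (this is not the authors' code; the function name, toy posteriors, and numbers below are invented for the sketch), the bound for discrete codes can be computed like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def mi_lower_bound(codes_onehot, q_probs, prior_probs):
    """InfoGAN-style variational lower bound on I(c; x):
    E[log Q(c|x)] + H(c), where Q approximates the true posterior p(c|x)."""
    # log-probability the auxiliary posterior assigns to the true code
    log_q = np.log((q_probs * codes_onehot).sum(axis=1) + 1e-12)
    # entropy of the (fixed) categorical prior over the discrete code
    entropy = -(prior_probs * np.log(prior_probs)).sum()
    return log_q.mean() + entropy

# Toy setup: 4 discrete codes drawn from a uniform prior.
prior = np.full(4, 0.25)
codes = np.eye(4)[rng.integers(0, 4, size=256)]

# A near-perfect posterior pushes the bound toward H(c) = log 4;
# an uninformative (uniform) posterior drives it to zero.
confident_q = codes * 0.97 + (1.0 - codes) * 0.01
uniform_q = np.full_like(codes, 0.25)

print(mi_lower_bound(codes, confident_q, prior))  # close to log 4 ~ 1.386
print(mi_lower_bound(codes, uniform_q, prior))    # close to 0
```

Maximizing this quantity with respect to the generator and the auxiliary posterior encourages each discrete code to control a recoverable, interpretable factor of the generated data.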

Related research:

- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets (06/12/2016)
  This paper describes InfoGAN, an information-theoretic extension to the ...
- Inference-InfoGAN: Inference Independence via Embedding Orthogonal Basis Expansion (10/02/2021)
  Disentanglement learning aims to construct independent and interpretable...
- Generative Adversarial Image Synthesis with Decision Tree Latent Controller (05/27/2018)
  This paper proposes the decision tree latent controller generative adver...
- Joint-VAE: Learning Disentangled Joint Continuous and Discrete Representations (03/31/2018)
  We present a framework for learning disentangled and interpretable joint...
- Maximum-Likelihood Augmented Discrete Generative Adversarial Networks (02/26/2017)
  Despite the successes in capturing continuous distributions, the applica...
- Versatile Neural Processes for Learning Implicit Neural Representations (01/21/2023)
  Representing a signal as a continuous function parameterized by neural n...
- Learning in Variational Autoencoders with Kullback-Leibler and Renyi Integral Bounds (07/05/2018)
  In this paper we propose two novel bounds for the log-likelihood based o...
