The variational infomax autoencoder

05/25/2019
by Vincenzo Crescimanna, et al.

We propose the Variational InfoMax AutoEncoder (VIMAE), a method for training a generative model that maximizes a variational lower bound on the mutual information between the visible data and the hidden representation while keeping the capacity of the network bounded. In the paper we investigate the role of capacity in a neural network and deduce that a small-capacity network tends to learn a more robust and disentangled representation than a high-capacity one. These observations are confirmed by the computational experiments.
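
For context, a standard variational lower bound on the mutual information between the visible data and the hidden representation is the Barber-Agakov bound sketched below. The abstract does not spell out the exact bound VIMAE optimizes, so this is an illustrative form only, with q(z|x) denoting the encoder and p(x|z) the decoder:

```latex
% Mutual information between visible data X and code Z, where the joint
% distribution is induced by the data p(x) and the encoder q(z|x):
%   I(X;Z) = H(X) - H(X|Z)
% Replacing the intractable inverse channel with a decoder model p(x|z)
% gives a tractable lower bound (Barber & Agakov, 2003):
\[
  I(X;Z) \;\ge\; H(X) + \mathbb{E}_{p(x)\,q(z\mid x)}\big[\log p(x \mid z)\big].
\]
% H(X) is a constant of the data, so maximizing this bound reduces to
% maximizing the expected reconstruction log-likelihood; the capacity
% constraint mentioned in the abstract would then act on the encoding
% channel q(z|x).
```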


research · 04/18/2019
Disentangled Representation Learning with Information Maximizing Autoencoder
Learning disentangled representation from any unlabelled data is a non-t...

research · 01/23/2019
An information theoretic approach to the autoencoder
We present a variation of the Autoencoder (AE) that explicitly maximizes...

research · 03/07/2020
The Variational InfoMax Learning Objective
Bayesian Inference and Information Bottleneck are the two most popular o...

research · 06/20/2018
InfoCatVAE: Representation Learning with Categorical Variational Autoencoders
This paper describes InfoCatVAE, an extension of the variational autoenc...

research · 03/28/2023
Information-Theoretic GAN Compression with Variational Energy-based Model
We propose an information-theoretic knowledge distillation approach for ...

research · 07/03/2023
VOLTA: Diverse and Controllable Question-Answer Pair Generation with Variational Mutual Information Maximizing Autoencoder
Previous question-answer pair generation methods aimed to produce fluent...

research · 03/09/2022
The Transitive Information Theory and its Application to Deep Generative Models
Paradoxically, a Variational Autoencoder (VAE) could be pushed in two op...
