
An information theoretic approach to the autoencoder

by Vincenzo Crescimanna, et al.
University of Stirling

We present a variation of the Autoencoder (AE) that explicitly maximizes the mutual information between the input data and the hidden representation. By construction, the proposed model, the InfoMax Autoencoder (IMAE), learns a robust representation and good prototypes of the data. IMAE is compared both theoretically and computationally with state-of-the-art models: the Denoising and Contractive Autoencoders in the one-hidden-layer setting, and the Variational Autoencoder in the multi-layer case. Computational experiments on the MNIST and Fashion-MNIST datasets demonstrate, in particular, the strong clustering performance of IMAE.
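The InfoMax idea the abstract describes can be illustrated with a standard result: under a Gaussian decoder assumption, minimizing the reconstruction error of an autoencoder maximizes a variational lower bound on the mutual information I(X; Z) between the input X and the hidden code Z. The following is a minimal one-hidden-layer sketch of that connection in numpy; it is illustrative only and is not the authors' IMAE implementation (the data, architecture, and hyperparameters here are arbitrary assumptions).

```python
import numpy as np

# Minimal one-hidden-layer autoencoder (illustrative sketch, not IMAE itself).
# With a Gaussian decoder, minimizing 0.5*||x_hat - x||^2 maximizes a
# variational lower bound on I(X; Z) -- the InfoMax principle.

rng = np.random.default_rng(0)

# Toy data: 200 points in R^8 lying near a 2-D linear subspace.
n, d, h = 200, 8, 2
latent = rng.normal(size=(n, h))
mix = rng.normal(size=(h, d))
X = latent @ mix + 0.05 * rng.normal(size=(n, d))

# Parameters: tanh encoder (W1, b1), linear decoder (W2, b2).
W1 = 0.1 * rng.normal(size=(d, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.normal(size=(h, d)); b2 = np.zeros(d)

lr = 0.05
losses = []
for step in range(500):
    Z = np.tanh(X @ W1 + b1)          # hidden representation (code)
    Xhat = Z @ W2 + b2                # linear decoder = Gaussian mean
    err = Xhat - X
    loss = 0.5 * np.mean(np.sum(err ** 2, axis=1))
    losses.append(loss)
    # Backpropagate through the decoder and the tanh encoder.
    gW2 = Z.T @ err / n; gb2 = err.mean(axis=0)
    dZ = (err @ W2.T) * (1.0 - Z ** 2)
    gW1 = X.T @ dZ / n; gb1 = dZ.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

As training drives the reconstruction loss down, the code Z is forced to retain information about X, which is the bound-maximization view the paper makes explicit.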



The variational infomax autoencoder

We propose the Variational InfoMax AutoEncoder (VIMAE), a method to trai...

Capacity Studies for a Differential Growing Neural Gas

In 2019 Kerdels and Peters proposed a grid cell model (GCM) based on a D...

Stacked autoencoders based machine learning for noise reduction and signal reconstruction in geophysical data

Autoencoders are neural network formulations where the input and output ...

The Wyner Variational Autoencoder for Unsupervised Multi-Layer Wireless Fingerprinting

Wireless fingerprinting refers to a device identification method leverag...

DEVDAN: Deep Evolving Denoising Autoencoder

The Denoising Autoencoder (DAE) enhances the flexibility of the data str...

Supervised COSMOS Autoencoder: Learning Beyond the Euclidean Loss!

Autoencoders are unsupervised deep learning models used for learning rep...

The Deep Kernelized Autoencoder

Autoencoders learn data representations (codes) in such a way that the i...