An information theoretic approach to the autoencoder

01/23/2019
by Vincenzo Crescimanna, et al.

We present a variation of the Autoencoder (AE) that explicitly maximizes the mutual information between the input data and the hidden representation. The proposed model, the InfoMax Autoencoder (IMAE), is by construction able to learn a robust representation and good prototypes of the data. IMAE is compared both theoretically and computationally with state-of-the-art models: the Denoising and Contractive Autoencoders in the one-hidden-layer setting, and the Variational Autoencoder in the multi-layer case. Computational experiments on the MNIST and Fashion-MNIST datasets demonstrate, in particular, the strong clustering performance of IMAE.
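To make the idea concrete, here is a minimal sketch of one standard way to maximize mutual information between input and code in an autoencoder. It is not the paper's exact objective: it assumes a fixed-variance Gaussian decoder, under which the Barber-Agakov variational bound I(X; Z) >= H(X) + E[log q(x | z)] reduces to minimizing the mean-squared reconstruction error. The PyTorch usage, layer sizes, and the InfoMaxAE name are illustrative assumptions.

```python
# Minimal sketch (assumptions labeled above): an autoencoder whose training
# objective is a variational lower bound on I(X; Z). With a fixed-variance
# Gaussian decoder q(x | z), maximizing the bound is equivalent to minimizing
# the mean-squared reconstruction error.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InfoMaxAE(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, in_dim=784, hid_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        z = self.encoder(x)           # hidden representation (the code)
        return self.decoder(z), z     # reconstruction and code

model = InfoMaxAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(128, 784)              # stand-in for a batch of MNIST images
x_hat, z = model(x)

# Negative reconstruction error is, up to constants, the lower bound on I(X; Z),
# so minimizing MSE maximizes the bound.
loss = F.mse_loss(x_hat, x)
opt.zero_grad()
loss.backward()
opt.step()
```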


Related research

05/25/2019 · The variational infomax autoencoder
We propose the Variational InfoMax AutoEncoder (VIMAE), a method to trai...

12/23/2022 · Capacity Studies for a Differential Growing Neural Gas
In 2019 Kerdels and Peters proposed a grid cell model (GCM) based on a D...

07/07/2019 · Stacked autoencoders based machine learning for noise reduction and signal reconstruction in geophysical data
Autoencoders are neural network formulations where the input and output ...

03/28/2023 · The Wyner Variational Autoencoder for Unsupervised Multi-Layer Wireless Fingerprinting
Wireless fingerprinting refers to a device identification method leverag...

10/08/2019 · DEVDAN: Deep Evolving Denoising Autoencoder
The Denoising Autoencoder (DAE) enhances the flexibility of the data str...

10/15/2018 · Supervised COSMOS Autoencoder: Learning Beyond the Euclidean Loss!
Autoencoders are unsupervised deep learning models used for learning rep...

07/19/2018 · The Deep Kernelized Autoencoder
Autoencoders learn data representations (codes) in such a way that the i...
