Auto-encoders: reconstruction versus compression

03/30/2014
by Yann Ollivier, et al.

We discuss the similarities and differences between training an auto-encoder to minimize the reconstruction error, and training the same auto-encoder to compress the data via a generative model. Minimizing a codelength for the data using an auto-encoder is equivalent to minimizing the reconstruction error plus some correcting terms which have an interpretation as either a denoising or contractive property of the decoding function. These terms are related but not identical to those used in denoising or contractive auto-encoders [Vincent et al. 2010, Rifai et al. 2011]. In particular, the codelength viewpoint fully determines an optimal noise level for the denoising criterion.

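To make the contrast between the two training criteria concrete, here is a minimal, hypothetical PyTorch sketch. It trains a toy auto-encoder with a plain reconstruction objective versus a denoising-style objective with an explicit noise level. The names (`AutoEncoder`, `denoising_loss`) and the noise level `sigma` are illustrative assumptions, not from the paper; the Gaussian corruption stands in for the correcting terms discussed in the abstract, and the sketch does not reproduce how the codelength argument fixes the noise level.

```python
# Toy contrast: reconstruction-only training vs. a denoising-style criterion.
# Hypothetical sketch; not the exact objective derived in the paper.
import torch
import torch.nn as nn


class AutoEncoder(nn.Module):
    def __init__(self, dim=20, hidden=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))


def reconstruction_loss(model, x):
    # Plain criterion: mean squared reconstruction error on clean inputs.
    return ((model(x) - x) ** 2).mean()


def denoising_loss(model, x, sigma=0.1):
    # Denoising-style criterion: reconstruct x from a corrupted copy.
    # sigma plays the role of the noise level that, under the codelength
    # viewpoint, would be determined by the objective rather than hand-tuned.
    x_noisy = x + sigma * torch.randn_like(x)
    return ((model(x_noisy) - x) ** 2).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = AutoEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    data = torch.randn(256, 20)  # toy data

    for step in range(100):
        opt.zero_grad()
        loss = denoising_loss(model, data, sigma=0.1)  # train with noise
        loss.backward()
        opt.step()

    # Report the plain reconstruction error of the denoising-trained model.
    print(float(reconstruction_loss(model, data)))
```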

