A Generative Process for Sampling Contractive Auto-Encoders

06/27/2012
by   Salah Rifai, et al.

The contractive auto-encoder learns a representation of the input data that captures the local manifold structure around each data point, through the leading singular vectors of the Jacobian of the transformation from input to representation. The corresponding singular values specify how much local variation is plausible in directions associated with the corresponding singular vectors, while remaining in a high-density region of the input space. This paper proposes a procedure for generating samples that are consistent with the local structure captured by a contractive auto-encoder. The associated stochastic process defines a distribution from which one can sample, and which experimentally appears to converge quickly and mix well between modes, compared to Restricted Boltzmann Machines and Deep Belief Networks. The intuitions behind this procedure can also be used to train the second layer of contraction that pools lower-level features and learns to be invariant to the local directions of variation discovered in the first layer. We show that this can help learn and represent invariances present in the data and improve classification error.
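The intuition above can be sketched in code. The following is a hypothetical toy example, not the paper's exact procedure: a tiny tanh encoder with tied weights and a linear decoder (both assumptions for illustration), whose Jacobian is decomposed by SVD. Each step perturbs the input along the leading right-singular vectors of the Jacobian, with magnitudes scaled by the singular values, then reconstructs, so samples stay near the learned manifold.

```python
import numpy as np

# Toy sketch (assumptions: tied weights, linear decoder, untrained
# random parameters). The encoder Jacobian's leading singular vectors
# give the locally plausible directions of variation; the singular
# values scale how far a sample may move along each of them.

rng = np.random.default_rng(0)
d_in, d_h = 5, 3
W = rng.normal(scale=0.3, size=(d_h, d_in))  # encoder weights (tied)
b = np.zeros(d_h)
c = np.zeros(d_in)

def encode(x):
    return np.tanh(W @ x + b)

def decode(h):
    return W.T @ h + c  # linear decoder (assumption)

def jacobian(x):
    # dh/dx for a tanh encoder: diag(1 - h^2) @ W
    h = encode(x)
    return (1.0 - h**2)[:, None] * W

def sample_step(x, k=2, eps=0.1):
    """One step of the local sampling intuition: perturb x along the
    k leading right-singular vectors of the encoder Jacobian, scaled
    by the singular values, then project back through encode/decode
    so reconstruction pulls the sample toward high density."""
    U, s, Vt = np.linalg.svd(jacobian(x))
    noise = rng.normal(size=k)
    delta = (Vt[:k].T * (s[:k] * eps)) @ noise
    return decode(encode(x + delta))

x = rng.normal(size=d_in)
for _ in range(10):
    x = sample_step(x)
print(x.shape)  # (5,)
```

Iterating `sample_step` defines the kind of stochastic process the abstract describes: a chain whose stationary distribution concentrates where the model assigns high density, with mixing driven by the Jacobian's leading directions.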

Related research:

- 04/21/2011, "Learning invariant features through local space contraction": We present in this paper a novel approach for training deterministic aut...
- 11/18/2012, "What Regularized Auto-Encoders Learn from the Data Generating Distribution": What do auto-encoders learn about the underlying data generating distrib...
- 06/30/2012, "Implicit Density Estimation by Local Moment Matching to Sample from Auto-Encoders": Recent work suggests that some auto-encoder variants do a good job of ca...
- 07/29/2014, "How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation": We propose to exploit reconstruction as a layer-local training signal f...
- 12/20/2014, "Scoring and Classifying with Gated Auto-encoders": Auto-encoders are perhaps the best-known non-probabilistic methods for r...
- 01/16/2018, "Unsupervised Representation Learning with Laplacian Pyramid Auto-encoders": Scale-space representation has been popular in computer vision community...
- 03/05/2018, "Thermodynamics of Restricted Boltzmann Machines and related learning dynamics": We analyze the learning process of the restricted Boltzmann machine (RBM...
