What is a Contractive Autoencoder?
A contractive autoencoder is an unsupervised deep learning technique that helps a neural network encode unlabeled training data.
Autoencoders in general are used to learn a representation, or encoding, for a set of unlabeled data, usually as the first step towards dimensionality reduction or generating new data models.
How do Contractive Autoencoders Work?
A contractive autoencoder makes the learned encoding less sensitive to small variations in its training inputs. This is accomplished by adding a regularizer, or penalty term, to whatever cost or objective function the algorithm is trying to minimize, which reduces the learned representation's sensitivity to perturbations of the input. The penalty term is the squared Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input.
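The penalty described above can be sketched for a one-layer sigmoid encoder, where the Jacobian has a convenient closed form. The layer sizes, weights, and the penalty weight `lam` below are illustrative assumptions, not values from any particular implementation:

```python
import numpy as np

# Minimal sketch: contractive penalty for a single-layer sigmoid encoder
# h = sigmoid(W @ x + b). Shapes and values are illustrative only.
rng = np.random.default_rng(0)
n_in, n_hidden = 8, 4
W = rng.normal(scale=0.1, size=(n_hidden, n_in))
b = np.zeros(n_hidden)
x = rng.normal(size=n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h = sigmoid(W @ x + b)

# Jacobian of the encoder: J[j, i] = dh_j/dx_i = h_j * (1 - h_j) * W[j, i]
J = (h * (1.0 - h))[:, None] * W

# Squared Frobenius norm of J; for sigmoid units this factorizes as
# ||J||_F^2 = sum_j h_j^2 (1 - h_j)^2 * sum_i W[j, i]^2
penalty = np.sum(J ** 2)
closed_form = np.sum((h * (1.0 - h)) ** 2 * np.sum(W ** 2, axis=1))

# Contractive loss = reconstruction error + lam * penalty.
# A tied-weights decoder (x_hat = W.T @ h) is assumed for brevity.
x_hat = W.T @ h
lam = 0.1
loss = np.sum((x - x_hat) ** 2) + lam * penalty
```

In a full training loop this `loss` would be minimized by gradient descent over `W` and `b`; the `lam` hyperparameter trades reconstruction accuracy against how strongly the encoding contracts around the training data.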
Contractive autoencoders are usually employed as one of several autoencoder nodes, activating only when other encoding schemes fail to label a data point.
Related Terms:
- Denoising autoencoder
- Sparse autoencoder
- Variational autoencoder (VAE)