What is an Autoencoder?
The autoencoder network has three layers: an input layer, a hidden layer that encodes, and an output layer that decodes. Using backpropagation, the unsupervised algorithm trains by setting the target output values equal to the inputs. Because the hidden encoding layer is smaller than the input, the network is forced to perform dimensionality reduction, discarding noise while learning to reconstruct its inputs.
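This structure can be sketched in a few lines of NumPy. The layer sizes, learning rate, and toy data below are illustrative choices, not prescribed values; the point is that the target of training is the input itself.

```python
import numpy as np

# Minimal three-layer autoencoder: input -> small hidden code -> output.
# The target equals the input, so training is unsupervised.
rng = np.random.default_rng(0)

n_in, n_hidden = 8, 3            # bottleneck: 3 < 8 forces compression
W_enc = rng.normal(0, 0.1, (n_in, n_hidden))
W_dec = rng.normal(0, 0.1, (n_hidden, n_in))

X = rng.normal(size=(64, n_in))  # toy data standing in for real inputs

losses = []
lr = 0.05
for _ in range(200):
    code = np.tanh(X @ W_enc)            # encoder
    recon = code @ W_dec                 # linear decoder
    err = recon - X                      # reconstruction error vs the input
    losses.append(np.mean(err ** 2))
    # backpropagation through the two weight matrices
    grad_dec = code.T @ err / len(X)
    grad_code = err @ W_dec.T * (1 - code ** 2)  # tanh'(x) = 1 - tanh(x)^2
    grad_enc = X.T @ grad_code / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(losses[0], losses[-1])  # reconstruction error shrinks as training proceeds
```

Because the hidden layer has fewer units than the input, the network cannot simply copy its input; it must find a compressed code that preserves as much of the input as possible.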
How do Autoencoders work?
Autoencoder networks teach themselves how to compress data from the input layer into a shorter code, and then decompress that code into a reconstruction that matches the original input as closely as possible. This process sometimes involves multiple autoencoders, such as the stacked sparse autoencoder layers used in image processing.
For example, the first autoencoder might learn to encode simple features such as the angles of a roof, while the second analyzes the first layer's output to encode less obvious features such as a doorknob. The third then encodes an entire door, and so on, until the final autoencoder compresses the whole image into a code that matches the concept of a “house.”
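This greedy, layer-by-layer idea can be sketched as follows. The `train_autoencoder` helper and all sizes here are illustrative stand-ins, with random vectors playing the role of image features:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_autoencoder(X, n_hidden, steps=200, lr=0.05):
    """Train one tanh/linear autoencoder on X; return the encoder weights."""
    n_in = X.shape[1]
    W_enc = rng.normal(0, 0.1, (n_in, n_hidden))
    W_dec = rng.normal(0, 0.1, (n_hidden, n_in))
    for _ in range(steps):
        code = np.tanh(X @ W_enc)
        err = code @ W_dec - X                       # reconstruct the layer's input
        grad_code = err @ W_dec.T * (1 - code ** 2)
        W_dec -= lr * code.T @ err / len(X)
        W_enc -= lr * X.T @ grad_code / len(X)
    return W_enc

X = rng.normal(size=(128, 16))   # stand-in for low-level image features
W1 = train_autoencoder(X, 8)     # layer 1 learns simple features
H1 = np.tanh(X @ W1)             # layer 1 codes become the next layer's input
W2 = train_autoencoder(H1, 4)    # layer 2 learns features of those features
H2 = np.tanh(H1 @ W2)
print(H2.shape)                  # the input, compressed stage by stage
```

Each autoencoder is trained on the codes produced by the one before it, so later layers capture increasingly abstract combinations of earlier features.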
This can also be used for generative modeling. For example, if a system is manually given the codes it learned for “house” and “flying,” it could generate an image of a flying house, even if it has never processed such an image.
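Mechanically, this amounts to feeding hand-picked or blended latent codes through the decoder. In the sketch below the decoder weights are random stand-ins, and `z_house` / `z_flying` are hypothetical names; in practice both would come from a trained model:

```python
import numpy as np

# Generative use of a decoder: decode a blend of two learned concept codes.
# W_dec is a random stand-in for a trained decoder's weights.
rng = np.random.default_rng(2)
n_code, n_pixels = 4, 16
W_dec = rng.normal(0, 0.1, (n_code, n_pixels))

def decode(z):
    return z @ W_dec

z_house = rng.normal(size=n_code)    # hypothetical learned code for "house"
z_flying = rng.normal(size=n_code)   # hypothetical learned code for "flying"
z_combined = 0.5 * (z_house + z_flying)  # blend the two concepts
image = decode(z_combined)               # a novel output the model never saw
print(image.shape)
```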
Types of Autoencoders:
1. Denoising autoencoder – trained on a partially corrupted input so that it learns to recover the original, undistorted input.
2. Sparse autoencoder – uses a hidden layer with more units than the input, but a sparsity constraint keeps only a few of them active at a time; in stacked variants, each autoencoder takes the outputs of the previous one as its input.
3. Variational autoencoder (VAE) – learns a latent representation by adding a loss component that pushes the encoder's approximate posterior distribution toward a chosen prior.
4. Contractive autoencoder (CAE) – adds an explicit regularizer that forces the model to learn a function that is robust to small variations of the input values.
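The denoising variant changes only one thing in the basic training loop: the network sees a corrupted input but is scored against the clean original, so it cannot learn a trivial identity map. A minimal sketch, with illustrative noise level and sizes:

```python
import numpy as np

# Denoising autoencoder sketch: corrupt the input, reconstruct the clean data.
rng = np.random.default_rng(3)
n_in, n_hidden = 8, 6
W_enc = rng.normal(0, 0.1, (n_in, n_hidden))
W_dec = rng.normal(0, 0.1, (n_hidden, n_in))
X = rng.normal(size=(64, n_in))

lr = 0.05
for _ in range(300):
    X_noisy = X + 0.3 * rng.normal(size=X.shape)  # partially corrupt the input
    code = np.tanh(X_noisy @ W_enc)
    err = code @ W_dec - X                        # target is the CLEAN input
    grad_code = err @ W_dec.T * (1 - code ** 2)
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X_noisy.T @ grad_code / len(X)

# After training, reconstruct clean data from a fresh noisy copy:
recon = np.tanh((X + 0.3 * rng.normal(size=X.shape)) @ W_enc) @ W_dec
denoised_err = np.mean((recon - X) ** 2)
print(denoised_err)
```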
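For the variational autoencoder, the additional loss component has a standard closed form when the encoder outputs a diagonal Gaussian and the prior is a standard normal. The sketch below computes that KL-divergence term; the input values are illustrative:

```python
import numpy as np

# The extra loss term a VAE adds: KL divergence between the encoder's
# approximate posterior q(z|x) = N(mu, sigma^2) (diagonal) and the
# standard normal prior p(z) = N(0, I).
def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions
    return 0.5 * np.sum(mu ** 2 + np.exp(log_var) - log_var - 1.0)

print(kl_to_standard_normal(np.zeros(4), np.zeros(4)))  # 0.0: posterior equals prior
print(kl_to_standard_normal(np.ones(4), np.zeros(4)))   # 2.0: pulled away from prior
```

During training this term is added to the reconstruction loss, which keeps the latent space well organized around the prior and makes sampling from it meaningful.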