Training Stacked Denoising Autoencoders for Representation Learning

02/16/2021
by Jason Liang, et al.

We implement stacked denoising autoencoders, a class of neural networks capable of learning powerful representations of high-dimensional data. We describe stochastic gradient descent for unsupervised training of autoencoders, as well as a novel genetic-algorithm-based approach that makes use of gradient information. We analyze the performance of both optimization algorithms, and also the representation-learning ability of the autoencoder when it is trained on standard image classification datasets.
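The abstract's first ingredient, unsupervised SGD training of a denoising autoencoder, can be sketched in a few lines of NumPy. The sketch below is illustrative only, not the paper's actual setup: the toy data, layer sizes, corruption level, learning rate, and the tied-weight, sigmoid, cross-entropy design are all assumptions chosen for a minimal runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 points on a 4-D manifold embedded in 20-D.
Z = rng.normal(size=(200, 4))
A = rng.normal(size=(4, 20))
X = 1.0 / (1.0 + np.exp(-(Z @ A)))

n_in, n_hidden = 20, 8
W = rng.normal(0.0, 0.1, size=(n_in, n_hidden))  # tied weights: decoder uses W.T
b_h = np.zeros(n_hidden)  # encoder (hidden) bias
b_o = np.zeros(n_in)      # decoder (output) bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruct(data):
    """Encode and decode clean inputs with the current parameters."""
    return sigmoid(sigmoid(data @ W + b_h) @ W.T + b_o)

err_before = np.mean((reconstruct(X) - X) ** 2)

lr, corruption, epochs = 0.1, 0.3, 30
for _ in range(epochs):
    for x in X:
        # Denoising corruption: zero out a random fraction of the input.
        x_tilde = x * (rng.random(n_in) > corruption)
        h = sigmoid(x_tilde @ W + b_h)  # encoder
        y = sigmoid(h @ W.T + b_o)      # decoder (tied weights)
        # Cross-entropy loss with a sigmoid output gives delta_o = y - x,
        # where the target is the *clean* input x.
        delta_o = y - x
        delta_h = (delta_o @ W) * h * (1.0 - h)
        # Tied weights: the gradient sums encoder and decoder contributions.
        W -= lr * (np.outer(x_tilde, delta_h) + np.outer(delta_o, h))
        b_o -= lr * delta_o
        b_h -= lr * delta_h

err_after = np.mean((reconstruct(X) - X) ** 2)
```

After training, `err_after` should be well below `err_before`, since the 8 hidden units suffice to capture the data's 4-dimensional structure. The genetic-algorithm variant mentioned in the abstract is not reproduced here.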


Related research

- Decoding Stacked Denoising Autoencoders (05/10/2016): Data representation in a stacked denoising autoencoder is investigated. ...
- Learning Representations of Affect from Speech (11/15/2015): There has been a lot of prior work on representation learning for speech...
- The dynamics of representation learning in shallow, non-linear autoencoders (01/06/2022): Autoencoders are the simplest neural network for unsupervised learning, ...
- Laplacian Autoencoders for Learning Stochastic Representations (06/30/2022): Established methods for unsupervised representation learning such as var...
- Rapid Feature Learning with Stacked Linear Denoisers (05/05/2011): We investigate unsupervised pre-training of deep architectures as featur...
- Autoencoders Learn Generative Linear Models (06/02/2018): Recent progress in learning theory has led to the emergence of provable ...
- A Showcase of the Use of Autoencoders in Feature Learning Applications (05/08/2020): Autoencoders are techniques for data representation learning based on ar...
