Gradual Training Method for Denoising Auto Encoders

04/11/2015
by Alexander Kalmanovich, et al.

Stacked denoising auto encoders (DAEs) are well known to learn useful deep representations, which can be used to improve supervised training by initializing a deep network. We investigate a training scheme for deep DAEs in which layers are added gradually and all existing layers keep adapting as each new layer is added. We show that in the regime of mid-sized datasets, this gradual training provides a small but consistent improvement over stacked training in both reconstruction quality and classification error on the MNIST and CIFAR datasets.
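The abstract contrasts this gradual scheme with standard stacked (greedy layer-wise) pre-training, where each new layer is trained on the fixed output of the layers below it. Below is a minimal sketch of the gradual idea, assuming PyTorch; the layer sizes, masking-noise level, optimizer, and epoch counts are illustrative placeholders, not the configuration used in the paper.

```python
# Sketch of gradual DAE training (PyTorch assumed; all hyperparameters are
# illustrative, not taken from the paper).
import torch
import torch.nn as nn

def corrupt(x, noise=0.3):
    # Masking noise: randomly zero a fraction of the input units.
    return x * (torch.rand_like(x) > noise).float()

class DAE(nn.Module):
    """Deep denoising auto-encoder built up one layer at a time."""
    def __init__(self, in_dim):
        super().__init__()
        self.encoders = nn.ModuleList()
        self.decoders = nn.ModuleList()  # applied in reverse order
        self.dims = [in_dim]

    def add_layer(self, hidden_dim):
        self.encoders.append(nn.Sequential(nn.Linear(self.dims[-1], hidden_dim), nn.Sigmoid()))
        self.decoders.append(nn.Sequential(nn.Linear(hidden_dim, self.dims[-1]), nn.Sigmoid()))
        self.dims.append(hidden_dim)

    def forward(self, x):
        h = x
        for enc in self.encoders:
            h = enc(h)
        for dec in reversed(self.decoders):
            h = dec(h)
        return h

def gradual_train(data, layer_dims, epochs_per_layer=5, lr=1e-3, noise=0.3):
    model = DAE(data.shape[1])
    loss_fn = nn.MSELoss()
    for dim in layer_dims:
        model.add_layer(dim)
        # Key difference from stacked pre-training: every layer added so far
        # keeps adapting to the denoising objective, not only the new one.
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs_per_layer):
            opt.zero_grad()
            recon = model(corrupt(data, noise))
            loss = loss_fn(recon, data)
            loss.backward()
            opt.step()
    return model

if __name__ == "__main__":
    x = torch.rand(256, 784)                 # stand-in for MNIST-like inputs
    dae = gradual_train(x, layer_dims=[256, 64])
```

In a stacked variant, the loop over `layer_dims` would instead freeze previously trained layers and fit only the newly added encoder/decoder pair; the gradual variant above re-optimizes the full stack each time a layer is added.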

