What You Expect is NOT What You Get! Questioning Reconstruction/Classification Correlation of Stacked Convolutional Auto-Encoder Features

03/13/2017
by Michele Alberti, et al.

In this paper, we thoroughly investigate the quality of features produced by deep neural network architectures obtained by stacking and convolving Auto-Encoders. In particular, we are interested in the relation between their reconstruction score and their performance on document layout analysis. When using Auto-Encoders, one might intuitively assume that features which are good for reconstruction will also lead to high classification accuracy. However, we show that this is not always the case. We examine the reconstruction score, the training error, and the results obtained when the same features are used for both input reconstruction and a classification task. We show that the reconstruction score is not a good metric because it is biased by the quality of the decoder. Furthermore, experimental results suggest that there is no correlation between the reconstruction score and the quality of features for a classification task, and that the network's size and configuration do not allow assumptions about the magnitude of its training error. We therefore conclude that the reconstruction score and the training error, even taken together, should not be used to evaluate the quality of the features produced by Stacked Convolutional Auto-Encoders for a classification task. Consequently, one should investigate the network's classification ability directly.
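As a rough illustration of the two-metric evaluation the paper questions, the sketch below (assuming PyTorch; the architecture, synthetic data, and four-class labels are placeholders, not the authors' setup) first trains a small convolutional auto-encoder for reconstruction and then fits a linear classifier on the frozen encoder features, so that reconstruction error and classification accuracy can be read off as two independent numbers rather than inferred from one another.

# Minimal sketch (not the authors' exact setup): train a small convolutional
# auto-encoder for reconstruction, then freeze the encoder and fit a linear
# classifier on its features. Reconstruction MSE and classification accuracy
# are measured separately. Data here is a synthetic stand-in (assumption).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for image patches and layout-class labels (assumption).
X = torch.rand(512, 1, 28, 28)
y = torch.randint(0, 4, (512,))

encoder = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),    # 28x28 -> 14x14
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),   # 14x14 -> 7x7
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),   # 7x7 -> 14x14
    nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 14x14 -> 28x28
)

# Stage 1: unsupervised training with a reconstruction (MSE) objective.
ae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(20):
    ae_opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    ae_opt.step()

with torch.no_grad():
    recon_mse = nn.functional.mse_loss(decoder(encoder(X)), X).item()

# Stage 2: freeze the encoder and train a linear classifier on its features.
for p in encoder.parameters():
    p.requires_grad = False
feats = encoder(X).flatten(1).detach()
clf = nn.Linear(feats.shape[1], 4)
clf_opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
for _ in range(50):
    clf_opt.zero_grad()
    clf_loss = nn.functional.cross_entropy(clf(feats), y)
    clf_loss.backward()
    clf_opt.step()

acc = (clf(feats).argmax(1) == y).float().mean().item()
print(f"reconstruction MSE: {recon_mse:.4f}  classification accuracy: {acc:.3f}")

The point of the sketch is only that the two numbers come from different objectives: a low reconstruction MSE says nothing, by itself, about how separable the encoded features are for the classifier.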

Related research

11/23/2017 · A Pitfall of Unsupervised Pre-Training
  The point of this paper is to question typical assumptions in deep learn...

04/11/2015 · Gradual Training Method for Denoising Auto Encoders
  Stacked denoising auto encoders (DAEs) are well known to learn useful de...

12/19/2014 · Gradual training of deep denoising auto encoders
  Stacked denoising auto encoders (DAEs) are well known to learn useful de...

11/18/2012 · What Regularized Auto-Encoders Learn from the Data Generating Distribution
  What do auto-encoders learn about the underlying data generating distrib...

05/26/2017 · Learning Robust Features with Incremental Auto-Encoders
  Automatically learning features, especially robust features, has attract...

06/08/2015 · Stacked What-Where Auto-encoders
  We present a novel architecture, the "stacked what-where auto-encoders" ...

04/11/2016 · Binarized Neural Networks on the ImageNet Classification Task
  We trained Binarized Neural Networks (BNNs) on the high resolution Image...