A Pitfall of Unsupervised Pre-Training

11/23/2017
by Michele Alberti, et al.

The point of this paper is to question typical assumptions in deep learning and suggest alternatives. A particular contribution is to show that even if a Stacked Convolutional Auto-Encoder is good at reconstructing images, it is not necessarily good at discriminating their classes. When using Auto-Encoders, one intuitively assumes that features which are good for reconstruction will also lead to high classification accuracy. Indeed, this has become common research practice and is a strategy suggested by introductory books. However, we show that this is not always the case. We thoroughly investigate the quality of the features produced by Stacked Convolutional Auto-Encoders when trained to reconstruct their input. In particular, we analyze the relation between the reconstruction and classification capabilities of the network when the same features are used for both tasks. Experimental results suggest that, in fact, there is no correlation between the reconstruction score and the quality of the features for a classification task. More formally, this means that the lower-dimensional representation space learned by the Stacked Convolutional Auto-Encoder (while being trained for input reconstruction) is not necessarily more separable than the original input space. Furthermore, we show that the reconstruction error is not a good metric for assessing feature quality, because it is biased by the quality of the decoder. We do not question the usefulness of pre-training as such, but we conclude that aiming for the lowest reconstruction error is not necessarily a good strategy if a classification task is to be performed afterwards.
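To make the experimental setup concrete, below is a minimal sketch of the two-stage pipeline the abstract describes: unsupervised pre-training of a small convolutional auto-encoder on a reconstruction objective, followed by a linear classifier trained on the frozen encoder's features. This is not the authors' implementation; PyTorch, the MNIST dataset, the architecture sizes, and the training schedule are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the two-stage experiment:
# (1) pre-train a small convolutional auto-encoder for reconstruction,
# (2) freeze the encoder and fit a linear classifier on its features.
# Dataset, layer sizes, learning rates, and epoch counts are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
).to(device)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
).to(device)

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=256, shuffle=True)

# Stage 1: unsupervised pre-training, minimising the reconstruction error.
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for epoch in range(5):
    for x, _ in loader:
        x = x.to(device)
        loss = nn.functional.mse_loss(decoder(encoder(x)), x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last-batch reconstruction MSE {loss.item():.4f}")

# Stage 2: freeze the encoder and train only a linear classifier on its
# features. If the paper's observation holds, a lower MSE in stage 1 does
# not reliably translate into higher accuracy here.
for p in encoder.parameters():
    p.requires_grad = False
clf = nn.Linear(32 * 7 * 7, 10).to(device)
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
for epoch in range(5):
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        logits = clf(encoder(x).flatten(1))
        loss = nn.functional.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        correct += (logits.argmax(1) == y).sum().item()
        total += y.numel()
    print(f"epoch {epoch}: train accuracy {correct / total:.3f}")
```

Note that the stage-1 MSE depends on the decoder as much as on the encoder, which is precisely the bias the abstract points out: comparing the stage-1 reconstruction score against the stage-2 accuracy across runs is the kind of correlation check the paper performs.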
