Is Joint Training Better for Deep Auto-Encoders?

05/06/2014
by Yingbo Zhou, et al.

Traditionally, when generative models of data are built with deep architectures, greedy layer-wise pre-training is employed. In a well-trained model, the lower layers model the data distribution conditioned on the hidden variables, while the higher layers model the prior over those hidden variables. Because the greedy layer-wise scheme fixes the parameters of the lower layers while the higher layers are trained, it becomes very difficult for the model to learn a good prior over the hidden variables, which in turn leads to a suboptimal model of the data distribution. We therefore investigate joint training of deep autoencoders, where the architecture is viewed as a single stack of two or more single-layer autoencoders. One global reconstruction objective is jointly optimized, with the objective of each single-layer autoencoder acting as a local, layer-level regularizer. We empirically evaluate this joint training scheme and observe that it not only learns a better data model, but also learns better higher-layer representations, which highlights its potential for unsupervised feature learning. In addition, we find that the use of regularization in the joint training scheme is crucial for achieving good performance. In the supervised setting, joint training also shows superior performance when training deeper models. The joint training framework can thus provide a platform for investigating more effective use of different types of regularizers, especially in light of the growing volumes of available unlabeled data.
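To make the training scheme concrete, below is a minimal PyTorch sketch of the idea, not the authors' exact formulation: a two-layer stacked autoencoder optimized with one global reconstruction objective, while each layer's own reconstruction term is added as a local, layer-level regularizer. The layer sizes, sigmoid activations, mean-squared error, and the weight `lam` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JointStackedAE(nn.Module):
    """Two single-layer autoencoders viewed as one stack (sketch)."""
    def __init__(self, d_in=784, d_h1=256, d_h2=64):
        super().__init__()
        # layer-1 autoencoder
        self.enc1 = nn.Sequential(nn.Linear(d_in, d_h1), nn.Sigmoid())
        self.dec1 = nn.Sequential(nn.Linear(d_h1, d_in), nn.Sigmoid())
        # layer-2 autoencoder (operates on layer-1 codes)
        self.enc2 = nn.Sequential(nn.Linear(d_h1, d_h2), nn.Sigmoid())
        self.dec2 = nn.Sequential(nn.Linear(d_h2, d_h1), nn.Sigmoid())

    def forward(self, x):
        h1 = self.enc1(x)
        h2 = self.enc2(h1)
        return h1, h2

def joint_loss(model, x, lam=0.1):
    """Global reconstruction objective plus local per-layer regularizers."""
    mse = nn.functional.mse_loss
    h1, h2 = model(x)
    # global objective: reconstruct the input through the whole stack
    global_recon = model.dec1(model.dec2(h2))
    loss = mse(global_recon, x)
    # local regularizers: each single-layer autoencoder must also
    # reconstruct its own input (lam is an assumed weighting)
    loss = loss + lam * mse(model.dec1(h1), x)
    loss = loss + lam * mse(model.dec2(h2), h1)
    return loss

# usage: a single optimizer updates all layers at once (no greedy freezing)
model = JointStackedAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)  # dummy mini-batch
opt.zero_grad()
joint_loss(model, x).backward()
opt.step()
```

The contrast with greedy layer-wise pre-training is that every layer's parameters are updated on every step, so the higher layers can influence what the lower layers learn rather than being fit on top of frozen features.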
