Deep Networks with Stochastic Depth

03/30/2016
by Gao Huang, et al.

Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks, but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91% on CIFAR-10).
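The procedure the abstract describes is simple to express in code. Below is a minimal PyTorch sketch of one residual block trained with stochastic depth; it is not the authors' reference implementation, and the class name `StochasticDepthBlock` and the `survival_prob` argument are illustrative. During training, the entire residual branch is dropped for a given mini-batch with probability 1 - survival_prob, leaving only the identity shortcut; at test time, every block stays active and the branch is scaled by its survival probability, analogous to the rescaling used in dropout. (The paper additionally decays survival probabilities linearly with depth; a constant probability is used here for brevity.)

```python
import torch
import torch.nn as nn


class StochasticDepthBlock(nn.Module):
    """Residual block that is randomly bypassed during training.

    `survival_prob` (illustrative name) is the probability that the
    residual branch is kept for a given mini-batch.
    """

    def __init__(self, channels: int, survival_prob: float = 0.8):
        super().__init__()
        self.survival_prob = survival_prob
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Training: with probability survival_prob keep the residual
            # branch; otherwise skip it entirely for this mini-batch,
            # so only the identity shortcut remains.
            if torch.rand(1).item() < self.survival_prob:
                x = x + self.branch(x)
        else:
            # Test time: all blocks are active, so scale the branch by
            # its survival probability to match the expected
            # training-time contribution.
            x = x + self.survival_prob * self.branch(x)
        return self.relu(x)
```

Because a skipped block contributes no forward computation and no backward pass for that mini-batch, the expected depth of the network during training is shorter than its test-time depth, which is where the training-time savings come from.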

