Gradual DropIn of Layers to Train Very Deep Neural Networks

11/22/2015
by Leslie N. Smith, et al.

We introduce the concept of dynamically growing a neural network during training. In particular, an untrainable deep network starts as a trainable shallow network, and new layers are slowly, organically incorporated during training, thereby increasing the network's depth. This is accomplished by a new layer, which we call DropIn. The DropIn layer starts by passing along the output of a previous layer (effectively skipping over the newly added layers), then increasingly includes units from the new layers in both the feedforward and backpropagation passes. We show that deep networks, which are untrainable with conventional methods, will converge when DropIn layers are interspersed in the architecture. In addition, we demonstrate that DropIn provides regularization during training in a manner analogous to dropout. Experiments are described on the MNIST dataset with various expanded LeNet architectures, on the CIFAR-10 dataset with its architecture expanded from 3 to 11 layers, and on the ImageNet dataset with the AlexNet architecture expanded to 13 layers and with the VGG 16-layer architecture.
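The abstract only sketches the mechanism, so the following is a minimal, hedged PyTorch illustration of a DropIn-style layer rather than the authors' implementation. The class name `DropIn`, the linear ramp in `set_schedule`, the per-unit Bernoulli mask, and the evaluation-time expected mixture are illustrative assumptions; the wrapped block is assumed to preserve the input's shape so the skip path can be mixed element-wise.

```python
# Illustrative sketch only: schedule, mask granularity, and mixing rule
# are assumptions based on the abstract, not the paper's exact method.
import torch
import torch.nn as nn

class DropIn(nn.Module):
    """Gradually blends a newly added block into an existing network.

    Early in training the layer mostly passes its input straight through
    (skipping ``new_block``); as ``p_include`` ramps toward 1, more units
    come from the new block, in both the forward and backward passes.
    """
    def __init__(self, new_block: nn.Module):
        super().__init__()
        self.new_block = new_block  # newly added layer(s); assumed shape-preserving
        self.p_include = 0.0        # fraction of units taken from the new block

    def set_schedule(self, step: int, ramp_steps: int) -> None:
        # Assumed linear ramp from 0 (skip new layers) to 1 (fully include them).
        self.p_include = min(1.0, step / float(ramp_steps))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        new_out = self.new_block(x)
        if self.training:
            # Per-unit Bernoulli mask: 1 -> unit comes from the new block,
            # 0 -> unit is taken from the skipped (identity) path.
            # Gradients flow only through the units that were included.
            mask = (torch.rand_like(x) < self.p_include).to(x.dtype)
            return mask * new_out + (1.0 - mask) * x
        # At evaluation time, use the expected mixture, analogous to dropout.
        return self.p_include * new_out + (1.0 - self.p_include) * x
```

In use, one would call `set_schedule` each training step so the network starts as the original shallow model and ends with the new layers fully active.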


