Incremental Training of Deep Convolutional Neural Networks

03/27/2018
by Roxana Istrate et al.

We propose an incremental training method that partitions the original network into sub-networks, which are then gradually incorporated into the running network during the training process. To allow the network to grow smoothly, we introduce a look-ahead initialization that outperforms random initialization. We demonstrate that our incremental approach reaches the baseline accuracy of the reference network. In addition, it identifies smaller partitions of the original state-of-the-art network that deliver the same final accuracy using only a fraction of the total number of parameters, which allows for a potential training-time speedup of several factors. We report training results on CIFAR-10 for ResNet and VGGNet.
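To make the scheme concrete, here is a minimal PyTorch sketch of this kind of incremental training. Everything in it is illustrative: the network, the partition widths, and the helper names (make_block, IncrementalCNN, lookahead_init) are assumptions, the data is a synthetic stand-in for CIFAR-10, and the look-ahead step simply pre-trains the newly added partition with the rest of the network frozen, which approximates, but is not necessarily, the authors' exact initialization.

```python
import itertools

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset


def make_block(in_ch, out_ch):
    # One convolutional partition (sub-network) of the full model.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )


class IncrementalCNN(nn.Module):
    """A small CNN that grows by appending predefined partitions."""

    def __init__(self, widths=(16, 32, 64), num_classes=10):
        super().__init__()
        self.widths = (3,) + tuple(widths)  # input is RGB
        self.num_classes = num_classes
        self.blocks = nn.ModuleList([make_block(self.widths[0], self.widths[1])])
        self.head = nn.Linear(self.widths[1], num_classes)

    def grow(self):
        # Incorporate the next partition and rebuild the classifier head
        # to match the new channel width (a sketch-level simplification).
        i = len(self.blocks)
        self.blocks.append(make_block(self.widths[i], self.widths[i + 1]))
        self.head = nn.Linear(self.widths[i + 1], self.num_classes)

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.head(x)


def lookahead_init(model, loader, steps=20, lr=0.01):
    # Stand-in for the paper's look-ahead initialization: briefly fit only
    # the newest block (plus the fresh head) while everything else is frozen,
    # so the grown network starts close to the smaller network's function.
    for p in model.parameters():
        p.requires_grad = False
    new_params = list(model.blocks[-1].parameters()) + list(model.head.parameters())
    for p in new_params:
        p.requires_grad = True
    opt = torch.optim.SGD(new_params, lr=lr)
    batches = itertools.cycle(loader)
    for _ in range(steps):
        xb, yb = next(batches)
        opt.zero_grad()
        F.cross_entropy(model(xb), yb).backward()
        opt.step()
    for p in model.parameters():
        p.requires_grad = True


if __name__ == "__main__":
    # Synthetic CIFAR-10-shaped data (32x32 RGB, 10 classes) keeps the
    # sketch self-contained; swap in torchvision's CIFAR-10 for real runs.
    data = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,)))
    loader = DataLoader(data, batch_size=32, shuffle=True)

    model = IncrementalCNN()
    for phase in range(3):  # one training phase per partition
        if phase > 0:
            model.grow()
            lookahead_init(model, loader)
        opt = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
        for _ in range(2):  # short demo schedule per phase
            for xb, yb in loader:
                opt.zero_grad()
                F.cross_entropy(model(xb), yb).backward()
                opt.step()
```

Recreating the optimizer after each growth step ensures the newly added parameters are actually updated; a faithful reproduction would also carry over learning-rate schedules and the paper's specific partitioning of the ResNet and VGGNet architectures.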

Related research:

06/10/2020 | Better Together: Resnet-50 accuracy with 13x fewer parameters and at 3x speed
Recent research on compressing deep neural networks has focused on reduc...

10/12/2016 | Fast Training of Convolutional Neural Networks via Kernel Rescaling
Training deep Convolutional Neural Networks (CNN) is a time consuming ta...

10/03/2022 | Multipod Convolutional Network
In this paper, we introduce a convolutional network which we call MultiP...

07/31/2017 | An Effective Training Method For Deep Convolutional Neural Network
In this paper, we propose the nonlinearity generation method to speed up...

02/15/2019 | Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
Deep neural networks are typically highly over-parameterized with prunin...

12/19/2019 | Multilevel Initialization for Layer-Parallel Deep Neural Network Training
This paper investigates multilevel initialization strategies for trainin...
