Decoupled Greedy Learning of CNNs

01/23/2019
by Eugene Belilovsky, et al.

A commonly cited inefficiency of neural network training by back-propagation is the update-locking problem: each layer must wait for the error signal to propagate through the entire network before updating its parameters. We consider and analyze a training procedure, Decoupled Greedy Learning (DGL), that addresses this problem more effectively, and at larger scales, than previous solutions. It is based on a greedy relaxation of the joint training objective, recently shown to be effective for Convolutional Neural Networks (CNNs) on large-scale image classification. We consider an optimization of this objective that permits us to decouple the layer training, allowing layers or modules in the network to be trained with potentially linear parallelization across layers. We show theoretically and empirically that this approach converges. In addition, we find empirically that it can lead to better generalization than sequential greedy optimization, and even than standard end-to-end back-propagation. We show that an extension of this approach to asynchronous settings, where modules can operate with large communication delays, is possible with the use of a replay buffer. We demonstrate the effectiveness of DGL against alternatives on the CIFAR-10 dataset and on the large-scale ImageNet dataset, where we are able to effectively train VGG and ResNet-152 models.
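For intuition, the decoupled update can be sketched in a few lines of PyTorch. The following is a minimal illustration, not the paper's exact architecture: the module sizes, auxiliary-head design, two-module depth, and hyperparameters are assumptions made for the example. Each module owns an auxiliary classifier and its own optimizer, trains against a local loss on the detached output of its predecessor, and passes a gradient-free activation downstream, so no module waits on a global backward pass.

    import torch
    import torch.nn as nn

    class LocalModule(nn.Module):
        """One CNN block with its own auxiliary classifier and optimizer."""
        def __init__(self, in_ch, out_ch, num_classes):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(),
            )
            # Auxiliary head: supplies a purely local training signal.
            self.aux = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(out_ch, num_classes),
            )
            self.opt = torch.optim.SGD(self.parameters(), lr=0.1)

        def local_step(self, x, y):
            # x arrives detached, so the backward pass stops at this
            # module: the update neither waits for nor blocks any other
            # module, removing the update-locking dependency.
            h = self.block(x)
            loss = nn.functional.cross_entropy(self.aux(h), y)
            self.opt.zero_grad()
            loss.backward()
            self.opt.step()
            return h.detach()  # hand a gradient-free activation downstream

    # Two modules for illustration; sizes match CIFAR-10-shaped inputs.
    modules = [LocalModule(3, 64, 10), LocalModule(64, 64, 10)]

    def train_batch(x, y):
        h = x
        for m in modules:  # each iteration could run on a separate worker
            h = m.local_step(h, y)

In the asynchronous extension mentioned above, the direct activation hand-off between modules would be replaced by a replay buffer from which each module draws (possibly stale) activations, allowing modules to tolerate large communication delays.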

