Layer-Parallel Training of Deep Residual Neural Networks

12/11/2018
by S. Günther, et al.

Residual neural networks (ResNets) are a promising class of deep neural networks that have shown excellent performance on a number of learning tasks, e.g., image classification and recognition. Mathematically, ResNet architectures can be interpreted as forward Euler discretizations of a nonlinear initial value problem whose time-dependent control variables represent the weights of the neural network. Hence, training a ResNet can be cast as an optimal control problem of the associated dynamical system. For similar time-dependent optimal control problems arising in engineering applications, parallel-in-time methods have shown notable improvements in scalability. This paper demonstrates the use of these techniques for efficient and effective training of ResNets. The proposed algorithms replace the classical (sequential) forward and backward propagation through the network layers with a parallel nonlinear multigrid iteration applied to the layer domain. This adds a new dimension of parallelism across layers that is attractive when training very deep networks. From this basic idea, we derive multiple layer-parallel methods. The most efficient version employs a simultaneous optimization approach in which updates to the network parameters are based on inexact gradient information in order to speed up the training process. Using numerical examples from supervised classification, we demonstrate that our approach achieves training performance comparable to that of traditional methods while enabling layer-parallelism, and thus achieves speedup over layer-serial methods through greater concurrency.
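To make the abstract's two central ideas concrete, the sketch below shows (1) a residual block acting as one forward Euler step of a network ODE, and (2) a parareal-style parallel-in-time iteration over chunks of layers. Note the hedge: parareal is used here only as a simple stand-in for the paper's nonlinear multigrid (MGRIT/FAS) scheme, and all names, dimensions, and the tanh right-hand side are illustrative assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    width, n_layers, h = 4, 16, 0.1   # network width, depth, "time" step (all illustrative)
    weights = [rng.standard_normal((width, width)) * 0.1 for _ in range(n_layers)]

    def f(u, W):
        """Right-hand side of the network ODE: u' = tanh(W u)."""
        return np.tanh(W @ u)

    def euler_step(u, W):
        """One residual block == one forward Euler step: u + h * f(u, W)."""
        return u + h * f(u, W)

    def serial_forward(u0):
        """Classical layer-serial propagation through all blocks."""
        u = u0
        for W in weights:
            u = euler_step(u, W)
        return u

    chunk = 4                      # layers per coarse interval
    n_chunks = n_layers // chunk

    def fine(u, c):
        """Fine propagator: exact Euler sweep over the layers of chunk c.
        These sweeps are independent across chunks, hence layer-parallel."""
        for W in weights[c * chunk:(c + 1) * chunk]:
            u = euler_step(u, W)
        return u

    def coarse(u, c):
        """Cheap coarse propagator: one Euler step of size chunk * h."""
        return u + chunk * h * f(u, weights[c * chunk])

    def parareal_forward(u0, n_iter=3):
        """Parareal update: u[c+1] = coarse(u[c]_new) + fine(u[c]_old) - coarse(u[c]_old)."""
        u = [u0]
        for c in range(n_chunks):          # initial serial coarse sweep
            u.append(coarse(u[c], c))
        for _ in range(n_iter):
            fine_vals = [fine(u[c], c) for c in range(n_chunks)]    # parallelizable across chunks
            coarse_old = [coarse(u[c], c) for c in range(n_chunks)]
            new = [u0]
            for c in range(n_chunks):      # cheap serial correction sweep
                new.append(coarse(new[c], c) + fine_vals[c] - coarse_old[c])
            u = new
        return u[-1]

    u0 = rng.standard_normal(width)
    # Error shrinks with each iteration; parareal is exact after n_chunks iterations.
    print(np.linalg.norm(serial_forward(u0) - parareal_forward(u0)))

The design point the abstract makes carries over directly: the expensive fine sweeps run concurrently across layer chunks, while only the cheap coarse sweep remains serial, which is what creates the new dimension of parallelism for very deep networks.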

Related research:

- Multilevel Initialization for Layer-Parallel Deep Neural Network Training (12/19/2019): This paper investigates multilevel initialization strategies for trainin...
- Layer-Parallel Training with GPU Concurrency of Deep Residual Neural Networks via Nonlinear Multigrid (07/14/2020): A Multigrid Full Approximation Storage algorithm for solving Deep Residu...
- Multilevel Minimization for Deep Residual Networks (04/13/2020): We present a new multilevel minimization framework for the training of d...
- Penalty and Augmented Lagrangian Methods for Layer-parallel Training of Residual Networks (09/03/2020): Algorithms for training residual networks (ResNets) typically require fo...
- Learning across scales - A multiscale method for Convolution Neural Networks (03/06/2017): In this work we establish the relation between optimal control and train...
- Make Deep Networks Shallow Again (09/15/2023): Deep neural networks have a good success record and are thus viewed as t...
- Imbedding Deep Neural Networks (01/31/2022): Continuous depth neural networks, such as Neural ODEs, have refashioned ...
