The abstract goal of machine learning is to model a function $f(\cdot,\theta)$ and train its parameters $\theta$ such that $f(y,\theta) \approx c$ for input-output pairs $(y,c)$ from a certain data set. Depending on the nature of inputs and outputs, the task can be regression or classification. When outputs are available for all samples, for part of the samples, or not at all, this formulation describes supervised, semi-supervised, and unsupervised learning, respectively. The function $f$ can be thought of as an interpolation or approximation function.
In deep learning, the function involves a DNN that transforms the input data using many layers. The layers successively apply affine transformations and element-wise nonlinearities that are parametrized by the network parameters. The training problem consists of finding parameters such that (1.1) is satisfied for data elements from a training data set, but also holds for previously unseen data from a validation data set, which has not been used during training. The former objective is commonly modeled as an expected loss, and optimization techniques are used to find the parameters that minimize the loss.
Despite rapid methodological developments, compute times for training state-of-the-art DNNs can still be prohibitive, on the order of hours or days, with networks involving hundreds or even thousands of layers and millions or billions of network parameters [16, 40]. There is thus great interest in increasing parallelism to reduce training runtimes. The most common approach is data-parallelism, where elements of the training data set are distributed onto multiple compute units. Synchronous and asynchronous data-parallel training algorithms have been developed to coordinate the network parameter updates [37, 1]. Another approach, referred to as model-parallelism, partitions different layers of the network and its parameters onto different compute units. Model-parallelism has traditionally been used when the network dimension exceeds the available memory of a single compute unit. Often, a combination of both approaches is employed [33, 16].
However, none of the above approaches to parallelism tackle the scalability barrier created by the intrinsically serial propagation of data through the network itself. In either of the above approaches, each subsequent layer can process accurate information only after the previous layer has finished its computation. As a result, training runtimes typically scale linearly with the number of layers. As current state-of-the-art networks tend to increase complexity by adding more and more layers (see, e.g., the ResNet-1001 with 1001 layers and 10.2 million weights), the serial layer propagation creates a serious bottleneck for fast and scalable training algorithms seeking to leverage modern HPC facilities.
In this paper, we address the above scalability barrier by introducing concurrency across the network layers. To this end, we replace the serial data propagation through the network layers by a nonlinear multigrid method that treats layers, or layer chunks, simultaneously and thus enables full layer-parallelism. Our goal is to have a training methodology that is scalable in the number of layers, i.e., doubling the number of layers and the number of compute resources should result in a near-constant runtime. To achieve this, we leverage recent advances in parallel-in-time integration methods for unsteady differential equations.
The forward propagation through a ResNet can be seen as a discretization of a time-dependent ordinary differential equation (ODE), as first observed in [31, 18, 32]. Interpreting the network propagation as a nonlinear dynamical system has since attracted increasing attention (see these works and the references therein).
Based on this interpretation, we employ a multigrid reduction in time approach 
that divides the time domain, which in this interpretation corresponds to the layer domain, into multiple time chunks that can be processed in parallel on different compute units. Coupling of the chunks is achieved through a coarse-grid correction scheme that propagates information across chunk interfaces on a coarser time- (i.e., layer-) grid. The method can be seen as a parallelization of the model; however, we employ full model-parallelism on the algorithmic level, processing layer chunks simultaneously within the iterative multigrid scheme and thus breaking the traditional layer-serial propagation. At convergence, the iterative multigrid scheme solves the same problem as a layer-serial method, and it can thus be utilized within any common gradient-based optimization technique to update the network parameters, such as stochastic gradient descent (SGD) or other batch approaches, without loss of accuracy. Further, it can be applied in addition to any data-parallelism across the data set elements, thus multiplying data-parallel runtime speedup. Runtime speedup over traditional layer-serial methods is achieved through the new dimension of parallelization across layers, enabling greater concurrency.
The addition of layer-parallelism allows ResNets to take advantage of large machines currently programmed with message-passing-style parallelism. The use of such large machines in conjunction with a multilevel training algorithm that is scalable in the number of layers opens the door to training networks with thousands, or possibly even millions, of layers. We demonstrate the feasibility of such an approach by using the parallel-in-time package XBraid with a ResNet on large clusters. Additionally, the non-intrusive approach of XBraid allows any node-level optimizations (such as those utilizing GPUs) to be used.
The iterative nature of the multigrid approach further enables the use of simultaneous optimization algorithms for training the network. Simultaneous optimization methods have been widely used for PDE-constrained optimization, where they show promise for reducing the runtime overhead of the optimization when compared to a pure simulation of the underlying PDE (see e.g. [7, 51, 5] and references therein). They aim at solving the optimization problem in an all-at-once fashion, updating the optimization parameters simultaneously while solving for the time-dependent system state. Here, we apply the One-shot method [9, 30] to solve the training problem simultaneously for the network state and parameters. In this approach, network parameter updates are based on inexact gradient information resulting from early stopping of the layer-parallel multigrid iteration.
The paper is structured as follows: Section 2 gives an introduction to the deep learning optimization problem and its interpretation as an optimal control problem. Further, it discusses numerical discretization of the optimal control problem and summarizes necessary conditions for optimality. We then introduce the layer-parallel multigrid approach replacing the forward and backward propagation through the network in Section 3. Section 4 focuses on the integration of the layer-parallel multigrid scheme into a simultaneous optimization algorithm. Numerical results demonstrating the feasibility and runtime benefits of the proposed layer-parallel scheme are presented in Section 5.
2 Deep Learning as a Dynamic Optimal Control Problem
In this section, we present an optimal control formulation of a supervised classification problem using a deep residual network. Limiting the discussion to this specific task allows us to provide a self-contained mathematical description. We note that the layer-parallel approach can be extended to other learning tasks, e.g., semi-supervised learning, auto-regression, or recurrent learning, and refer to [27, 2] for a general introduction and a comprehensive overview of deep learning techniques.
2.1 Optimal Control Formulation
In supervised classification, the given data set consists of feature (or example) vectors $y_k$ and associated class probability vectors $c_k \in \Delta$, where $\Delta$ denotes the unit simplex. The $j$-th component of $c_k$ represents the probability of example $y_k$ belonging to the $j$-th class. The learning problem aims at training a network, and its classifier, that approximate the feature-to-class mapping for all data elements.
A powerful class of networks are residual neural networks (ResNets). In abstract form, the network transformation applied to a generic input data example by an $N$-layer ResNet can be written as
The transformations typically consist of affine linear and element-wise nonlinear operations that are parameterized by the respective layer weights. We consider the single layer perceptron model, where $\sigma$ is a nonlinear activation function that is applied component-wise, e.g., $\tanh$ or a rectified linear unit (ReLU). Here, the weight vectors are partitioned into one part that defines a linear operator and another part that represents coefficients of a bias with respect to columns of given matrices. In this work, we assume that the linear operators are either dense matrices or correspond to convolution operators whose entries we determine in the training. However, our method can be extended to other parameterizations (e.g., antisymmetric matrices to enforce stability). While the layer operator needs to be a square matrix, we use a non-square operator at the opening layer to embed the data set in a higher-dimensional space.
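As a minimal sketch (with hypothetical names; $\tanh$ is chosen here as one possible activation, and the paper's exact parameterization of the bias via given matrices is omitted), one such residual layer can be written as:

```python
import numpy as np

def resnet_step(u, K, b, h, sigma=np.tanh):
    """One residual layer in the single-layer perceptron model:
    u_{l+1} = u_l + h * sigma(K u_l + b), with sigma applied element-wise."""
    return u + h * sigma(K @ u + b)
```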
In this formulation, $t$ is an artificial time that refers to the propagation of the input features through the neural network, starting from the input layer, with the network output being the solution of the initial value problem evaluated at some final time $T$. In contrast to the discrete ResNet propagation, the dynamical system continuously transforms the network state by prescribing its time-derivative through a vector field whose parameters will be learned during training.
In order to classify the network output into a specific class, a hypothesis function is required that predicts the class probabilities. Here, we limit ourselves to multinomial regression models, which are common in deep learning. To this end, we consider the softmax hypothesis function given by
where the normalization involves a vector of all ones, the exponential function is applied element-wise, and $W$ and $\mu$ denote a weight matrix and bias vector whose entries need to be learned in training alongside the network parameters.
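A numerically stable sketch of this hypothesis function (names hypothetical) could look like:

```python
import numpy as np

def softmax_hypothesis(u, W, mu):
    """Softmax hypothesis: predicted class probabilities for network output u.
    W is the classification weight matrix, mu the bias vector."""
    z = W @ u + mu
    z = z - z.max()   # shift for numerical stability; does not change the result
    e = np.exp(z)
    return e / e.sum()
```

The shift by the maximum entry avoids overflow in the exponential without changing the predicted probabilities.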
The performance of the network transformation and classification can then be measured by comparing the predicted class probabilities to the given ones. To this end, we use the cross entropy loss function
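A minimal implementation of the cross entropy between true and predicted class probabilities (the small eps guard against log(0) is our addition):

```python
import numpy as np

def cross_entropy(c_pred, c_true, eps=1e-12):
    """Cross entropy between true class probabilities c_true and predictions c_pred."""
    return -float(np.sum(c_true * np.log(c_pred + eps)))
```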
The training problem consists of minimizing the average cross entropy loss over many examples with respect to the network weights and the classification parameters. It can thus be cast as the following continuous-in-time optimal control problem:
The objective function consists of the empirical cross entropy loss and an additional regularization term denoted by $R$. In conventional deep learning approaches, $R$ typically applies a Tikhonov regularization penalizing “large” network and classification parameters, measured in a chosen norm. Within the time-continuous optimal control interpretation, we additionally penalize the time-derivative of the network weights in order to ensure weights that vary smoothly in time. This is an important ingredient for stability analysis.
2.2 Discretization of the Optimal Control Problem
We solve the continuous-in-time optimal control problem in a first-discretize-then-optimize fashion. For simplicity, we discretize the controls and the states at regularly spaced time-points $t_l = l\,\delta t$, $l = 0, \ldots, N$, with step size $\delta t = T/N$. In this setting, each discrete state and control corresponds to the $l$-th layer of the network. This leads to the discrete control problem
In this general description, $\Phi$ can denote any layer-to-layer propagator which maps data from one layer to the next. In the case of a forward Euler discretization, it reads
giving the ResNet propagation as in (2.2). However, the time-continuous interpretation of the network propagation permits employing other, possibly more stable discretization schemes of the initial value problem (2.4), and thus opens the door for new network architecture designs. It also allows for discretization of the controls and states at different time-points, which can improve efficiency and is the subject of further research. Further, numerical advances for solving the corresponding optimal control problem can be leveraged, such as the time-parallel approach discussed in this paper.
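For reference, the layer-serial forward Euler propagation that the multigrid scheme of Section 3 will parallelize can be sketched as (names hypothetical):

```python
def forward_propagate(u0, thetas, h, F):
    """Layer-serial forward Euler propagation u_{l+1} = u_l + h * F(u_l, theta_l).
    Returns the list of all layer states, from input to output."""
    states = [u0]
    for theta in thetas:
        states.append(states[-1] + h * F(states[-1], theta))
    return states
```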
2.3 Necessary Optimality Conditions
where $J$ denotes the objective function in (2.10) and the $\lambda$ are called adjoint variables for layer $l$ and example $k$. Optimal points of the problem are saddle points of the Lagrangian function; thus equating its partial derivatives with respect to all state, adjoint, and control variables to zero yields the following necessary conditions for optimality:
Here, subscripts denote partial derivatives. Training a residual network amounts to solving the above set of equations for the special choice of $\Phi$ being the forward Euler time-integration scheme. However, the above equations, as well as the discussions in the remainder of this paper, are general in the sense that any other layer-to-layer propagator corresponding to a discretization of the dynamical system (2.4) can be utilized.
The state equations correspond to the forward propagation of input examples through the network layers. The adjoint equations propagate partial derivatives with respect to the network states backwards through the network layers, starting from a terminal condition equal to the local derivative of the loss function. In a time-continuous setting, the adjoint equations correspond to the discretization of an additional adjoint dynamical system for propagating network state derivatives backwards in time. (The adjoint approach is a common and well-established method in optimal control that provides gradient information at computational costs independent of the design-space dimension.) We note that solving the above adjoint equations backwards in time is equivalent to the backpropagation method that is established within the deep learning community for computing the network gradient. It further corresponds to the reverse mode of automatic differentiation. The adjoint variables are utilized on the right-hand side of the design equations, which then form the so-called reduced gradient. For feasible state and adjoint variables, the reduced gradient holds the total derivative, i.e., the sensitivity, of the objective function with respect to the controls. It is thus used within gradient-based optimization methods for updating the network controls.
3 Layer-Parallel Multigrid Approach
In order to achieve concurrency across all the network layers, we replace the sequential propagation through the residual network (forward and backward) with an iterative multigrid scheme.
Based on the time-continuous nonlinear ODE interpretation of ResNets as in (2.4), and its time-discretization as in (2.11)–(2.12), we employ the multigrid reduction in time (MGRIT)  method to parallelize across the time domain of the network. While the discussion in this section revolves around time-grids, here each time-point is considered a layer in the network. Thus, the multigrid approach constructs a multilevel hierarchy, where each level is a network containing fewer layers (i.e., fewer time-points). The coarsest level will contain only a handful of layers, while the finest level could contain thousands (or more) of layers. When run in parallel, each compute unit will own only a few layers, thus allowing for massive parallelism to be applied to the learning algorithm.
Parallel-in-time methods have previously been applied in a machine learning context, although that work considered parallelism over epochs of the training algorithm, not layers. We refer to the works [19, 20, 29] for the details of the method, but provide a self-contained overview of the MGRIT scheme here.
3.1 Multigrid Across Layers for Forward Propagation
Forward propagation through the network can be collected into one block-structured system of equations, where each block row corresponds to a time-step, which in turn corresponds to a layer in the network. Here, the unknowns denote the network states at each time step for either a single generic input vector or for a batch, i.e., a subset, of input vectors.
Sequential time stepping solves (3.21) through forward substitution, i.e., forward propagation of input data through the network layers. In contrast, MGRIT solves (3.21) iteratively, beginning with some initial solution guess, by using the Full Approximation Storage (FAS) nonlinear multigrid method; see Section 3.1.1. In both cases, the exact same equations are solved and thus the same solution is reached (in the case of MGRIT, to within a user tolerance). Regarding cost, sequential time-stepping is optimal, $O(N)$ in the number of layers, but serial. MGRIT is also an $O(N)$ method, with a larger computational constant, but with parallelism in the layer dimension. This parallelism allows for a distributed workload, processing multiple layers in parallel on multiple compute units. Typically, a certain number of processors is needed for MGRIT to show a speedup over layer-serial forward propagation; this is referred to as the cross-over point. However, the observed speedups can be large, e.g., one previous study showed a speedup of 19x for a model optimization problem while using an additional 256 processors in time.
3.1.1 MGRIT using Full Approximation Scheme (FAS)
Similar to linear multigrid methods, the nonlinear FAS method computes coarse-grid error corrections to fine-grid approximations of the solution. Each iteration of the nonlinear MGRIT scheme consists of three steps: First, a relaxation scheme is employed to cheaply compute an approximation to the true solution on the fine grid. Then, the current error is approximated on a coarser grid by solving a coarse-grid residual equation. Lastly, the interpolated coarse-grid error approximation is used to correct the current fine-grid solution approximation. This idea is based on the fact that low-frequency error components can be reduced much faster with relaxation on coarser grids. While general introductions to linear and nonlinear multigrid methods are available in the literature, we explain here each of the algorithmic components of MGRIT, starting with the coarse-grid residual equation.
Let $v$ denote an approximation to the true solution $u$ of (3.21), such that $u = v + e$ with $e$ denoting the current error. Then this error can be expressed in terms of the residual as
In a multigrid setting, this residual equation (3.23) is solved on a coarser grid such that an approximation to the error can be computed more cheaply than on the fine grid. In the linear case, i.e., when the system operator is linear in the states, the residual equation reduces to a linear system for the error and can thus be solved for the error directly. In the nonlinear case, the residual equation is solved for the full approximation $u = v + e$ on the coarse grid before the error can be extracted as $e = u - v$.
For a given time-grid discretization, the coarse grid is defined by choosing a coarsening factor $c$ and assigning every $c$-th time-point to the next coarser time-grid, which has coarse-grid spacing $c\,\delta t$. An example of two grid levels is given in Figure 1. The residual, as well as the current approximation and the controls, are restricted to the coarse grid with injection, i.e., by choosing every $c$-th time-point, with the restrictions of the approximation and controls defined analogously. Consequently, the residual equation that is to be solved on the coarse grid reads
Here, the coarse-grid operator denotes a re-discretization of the fine-grid system on the coarse grid utilizing a coarse-grid propagator $\Phi_c$, i.e.,
an obvious choice for $\Phi_c$ is a re-discretization of the problem on the coarse grid, using the same propagator as on the fine grid but with a bigger time step $c\,\delta t$, thus skipping the fine-grid time-points and updating only the coarse-grid points. For instance, if $\Phi$ is a forward or backward Euler discretization with time-step size $\delta t$, then $\Phi_c$ is a forward or backward Euler discretization with time-step size $c\,\delta t$. In the case of forward Euler (i.e., the ResNet architecture), the coarse-grid propagator is given by
On the coarsest grid, the residual equation (3.25) is solved exactly with forward substitution. Afterwards, the error approximation on the coarse grid is extracted by subtracting the restricted fine-grid approximation from the coarse-grid solution. This coarse-grid error approximation is then used to correct the fine-grid approximation at the coarse-grid points.
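In code, restriction by injection and the C-point correction are one-liners (a sketch with hypothetical names):

```python
import numpy as np

def restrict_injection(u, c):
    """Restrict fine-grid states to the coarse grid by keeping every c-th point."""
    return u[::c]

def correct_coarse_points(u, e_c, c):
    """Add the coarse-grid error approximation e_c to the fine-grid states at C-points."""
    u[::c] += e_c
    return u
```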
Complementing the coarse time-grid error correction is the fine-grid relaxation process. Here, block Jacobi relaxation alternates between the fine-grid and the coarse-grid points. More precisely, relaxation on the fine-points (called F-relaxation) corresponds to updating each fine-point concurrently over each time chunk interval, thus propagating each coarse-point value through the corresponding fine-point interval as in
Importantly, each interval of fine-points can be computed independently, in parallel. Relaxation on the coarse-points (called C-relaxation) is analogous, and updates each coarse-point concurrently by propagating the nearest left neighboring value. For the $k$-th coarse-point, the update is given by
The actions of F- and C-relaxation are depicted in Figure 2. Unless otherwise noted, we use FCF-relaxation, which is an application of F-relaxation (3.28), followed by an application of C-relaxation (3.29), and then F-relaxation (3.28) again. We note that such F/C orderings in relaxation are common for multigrid methods.
Taken together, the coarse-grid error correction and the fine-grid relaxation form the two-grid MGRIT cycle depicted in Algorithm 1. (The F-relaxation two-grid version of nonlinear MGRIT is equivalent to the Parareal algorithm.) Typically, the MGRIT Algorithm 1 is carried out recursively, with successively coarser time-grids, until a coarsest time-grid of trivial size is reached, and Step 3 is solved exactly using forward substitution. If the levels are traversed in order, going down to the coarsest time-grid and then back to the finest time-grid, this is called a V-cycle. It corresponds to the “Solve” in Step 3 being implemented as a single recursive call. However, more powerful cycles can be applied that visit coarse time-grids more frequently (such as F-cycles; see, e.g., [48, 12] for more information on multigrid cycling).
Note that the main work carried out on a given time-grid is the parallel relaxation process. Thus the work on each MGRIT level is highly parallel. Only when a coarsest time-grid of trivial size is reached, is the level solved sequentially by forward substitution. Thus, the algorithm simultaneously computes all time-steps in parallel, reducing the serial propagation component to the size of the coarsest grid plus the traversal through each level.
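To make the two-grid cycle concrete, the following sketch applies FAS MGRIT to a scalar linear model problem (all names are hypothetical; for simplicity, the coarse propagator is chosen here as the exact $c$-fold fine propagator, whereas in practice a cheaper re-discretization is used):

```python
import numpy as np

def mgrit_two_grid(g, Phi, Phi_c, c, n_iter=3):
    """Two-grid FAS MGRIT sketch for the scalar layer-serial system
       u[0] = g[0],  u[i] = Phi(u[i-1]) + g[i]  for i >= 1."""
    n = len(g)
    u = np.array(g, dtype=float)                # initial solution guess

    def f_relax(u):
        # each coarse interval is independent and could run in parallel
        for i0 in range(0, n - 1, c):
            for i in range(i0 + 1, min(i0 + c, n)):
                u[i] = Phi(u[i - 1]) + g[i]
        return u

    def c_relax(u):
        for i in range(c, n, c):                # coarse points, also independent
            u[i] = Phi(u[i - 1]) + g[i]
        return u

    cpts = list(range(0, n, c))
    for _ in range(n_iter):
        u = f_relax(c_relax(f_relax(u)))        # FCF-relaxation
        # residual restricted to coarse points (zero at fine points after F-relaxation)
        r = np.array([g[0] - u[0]] + [g[i] + Phi(u[i - 1]) - u[i] for i in cpts[1:]])
        ru = u[cpts].copy()                     # restriction by injection
        # FAS coarse residual equation, solved exactly by forward substitution
        v = np.empty_like(ru)
        v[0] = ru[0] + r[0]
        for k in range(1, len(cpts)):
            v[k] = Phi_c(v[k - 1]) + (ru[k] - Phi_c(ru[k - 1])) + r[k]
        u[cpts] += v - ru                       # coarse-grid error correction
        u = f_relax(u)                          # update fine points from corrected C-points
    return u
```

With an exact coarse propagator the iteration reproduces the layer-serial solution after very few cycles; with a cheaper re-discretized coarse propagator, more iterations are needed but each level remains highly parallel.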
The MGRIT iterator has been shown to be a contraction in many settings for linear, nonlinear, parabolic, and hyperbolic problems, although hyperbolic problems tend to be more difficult (e.g., [17, 19, 30, 21]). Upon convergence, the limit fixed-point will satisfy the discrete network state equations as in (2.15)–(2.16), since MGRIT solves the same underlying problem.
Before starting the multigrid iterations, an initial solution guess for the network states must be set. Typically, the coarse-grid points are initialized using the best current solution estimate: often either some generic initial condition, or an interpolated solution from a cheaper, coarser time-grid.
3.2 Multigrid Across Layers for Backpropagation
The same nonlinear multigrid scheme as described in Section 3.1 can also be utilized to solve the adjoint equations (2.17)–(2.18) in a layer-parallel manner. The adjoint equations are linear in the adjoint variables, which are propagated backwards through the network. The adjoint space-time system thus reads
where again the unknowns denote the adjoint variables at each layer for a general example or for a batch of examples, and the right-hand side involves partial derivatives of the objective with respect to the network states. The system corresponds to the backwards layer-propagation of adjoint sensitivities, which in the case of a forward Euler discretization (i.e., the ResNet architecture) reads
Each backward propagator at layer $l$ depends on the primal state $u_l$; hence the system matrix and right-hand side of (3.31) depend on the current state, which is reflected in the subscripts. The structure of the adjoint system (3.31), however, is the same as that of the state system (3.21). Hence, the same MGRIT approach as presented in Algorithm 1 can be utilized to solve the adjoint equations with the layer-parallel multigrid scheme by applying the corresponding iteration for the adjoint vector.
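To illustrate the equivalence of the adjoint sweep with backpropagation, the following sketch (hypothetical names; tanh layers without bias and the terminal loss $J = \tfrac12\|u_N\|^2$ chosen for simplicity) propagates adjoint variables backwards and can be verified against finite differences:

```python
import numpy as np

def forward(u0, Ks, h):
    """Forward Euler ResNet: u_{l+1} = u_l + h * tanh(K_l u_l). Returns all states."""
    us = [u0]
    for K in Ks:
        us.append(us[-1] + h * np.tanh(K @ us[-1]))
    return us

def backprop_adjoint(us, Ks, h):
    """Adjoint (backpropagation) sweep for the loss J = 0.5 * ||u_N||^2:
    lam_N = u_N,  lam_l = lam_{l+1} + h * K_l^T diag(1 - tanh^2(K_l u_l)) lam_{l+1}."""
    lam = us[-1].copy()                          # terminal condition: dJ/du_N
    for u, K in zip(reversed(us[:-1]), reversed(Ks)):
        sp = 1.0 - np.tanh(K @ u) ** 2           # derivative of tanh
        lam = lam + h * K.T @ (sp * lam)
    return lam                                   # dJ/du_0
```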
The adjoint equations depend on the primal states. Therefore, those states need to be either stored during forward propagation or recomputed while solving the adjoint equations. Hybrid approaches like the check-pointing method have been developed, which trade memory consumption against computational complexity. Memory-free methods using reversible networks were first proposed for general dynamics. However, as has been shown, not all architectures that are reversible algebraically are forward and backward stable numerically. This motivates limiting the forward propagation to stable dynamics, e.g., inspired by hyperbolic systems.
3.3 Non-intrusive implementation
The MGRIT algorithm relies on the action of the layer-to-layer forward and backward propagators and their respective re-discretizations on coarser grid levels. However, it does not access or “know” the internals of these functions. Hence, MGRIT can be applied in a fully non-intrusive way with respect to any existing discretization of the nonlinear dynamics describing the network forward and backward propagation. A user can wrap existing sequential evolution operators according to an MGRIT software interface, and the MGRIT code then iteratively computes the solutions to (2.15)–(2.16) and (2.17)–(2.18) in parallel.
Our chosen MGRIT implementation for time-parallel computations (forward and backward) is XBraid. One particular advantage of XBraid is its generic and flexible user interface that requires relatively straightforward user routines which likely already exist, such as how to take inner products and norms of state vectors, or how to take a time-step with the forward and backward propagators.
Since the user defines the action of the propagator, any existing implementation of layer computations can continue to be used, including accelerator code, e.g., for GPUs. However, since each call to the propagator takes a single time-step, any use of GPU kernels implies memory movement to and from the CPU at every time-step. This is because current architectures largely rely on the CPU to handle the message-passing layer of parallelism, and it is over this layer that XBraid provides temporal parallelism. However, future implementations could move the message passing to occur solely on the GPU, thus removing this memory-movement overhead. Additionally, the bandwidth and latency between CPUs and accelerators will continue to improve, further ameliorating this issue.
The state and adjoint MGRIT iterations recover at convergence the same reduced gradient as a layer-serial forward- and backpropagation through the network. They can thus be integrated into any gradient-based training algorithm for updating the network control parameters. Sub-gradient methods, such as SGD or other batch approaches, can also be utilized by choosing the corresponding subset of training examples. Regarding speedup and parallelism, the layer-parallel computations are particularly attractive in the small-batch regime, when options for data-parallelism are limited. Overall, we expect a runtime speedup over a layer-serial approach for deep networks through the greater concurrency within the state and adjoint solves, when the computational resources are large enough.
4 Simultaneous Layer-Parallel Training
The iterative nature of the layer-parallel multigrid scheme allows for a simultaneous training approach that solves the network state and adjoint equations inexactly during training. To this end, we reduce the accuracy of the state and adjoint MGRIT solver during training and update the network control parameters utilizing inexact gradient information. This corresponds to an early stopping of the MGRIT iterations in each outer optimization cycle. The theoretical background of this early-stopping approach for the inner state and adjoint fixed-point iterations is based on the One-shot method, which has been successful in reducing runtimes of many PDE-constrained optimization problems in aerodynamic applications (e.g. [38, 24, 8]).
The simultaneous layer-parallel training approach is summarized in Algorithm 2. To clarify the details of the method, the following points need to be considered:
Number of state and adjoint updates: For a “large” number of inner MGRIT iterations, the algorithm recovers the same gradient, and hence the same scheme, as a conventional layer-serial gradient-based training approach, however with the addition of layer-parallelism, providing runtime reductions through greater concurrency. Smaller numbers of inner MGRIT iterations further reduce the runtime of each iteration and yield the simultaneous optimization approach. In that case, control parameter updates in Step 5 are based on inexact gradient information utilizing the most recent state and adjoint variables.
For the extreme case of a single inner iteration, the resulting optimization iteration can mathematically be interpreted as an approximate, reduced sequential quadratic programming (rSQP) method, for which convergence analysis is available. Theoretical considerations on the choice of the number of inner iterations rely on the state and adjoint residuals by searching for descent on an augmented Lagrangian function. In practice, choosing very small numbers of inner iterations has proven successful in our experience.
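The One-shot principle of updating state, adjoint, and control together, with only a single inexact fixed-point sweep each per cycle, can be illustrated on an entirely hypothetical scalar model problem (not the paper's network): state fixed point $u = 0.5u + \theta$, objective $J(u) = \tfrac12(u-1)^2$.

```python
def one_shot(alpha=0.1, n_cycles=300):
    """Toy One-shot iteration: one inexact state update, one inexact adjoint
    update, and one control update with the resulting inexact gradient."""
    u, lam, theta = 0.0, 0.0, 0.0
    for _ in range(n_cycles):
        u = 0.5 * u + theta              # one fixed-point (state) iteration
        lam = 0.5 * lam + (u - 1.0)      # one adjoint fixed-point iteration
        theta -= alpha * lam             # control update with inexact gradient
    return u, lam, theta
```

Despite never solving the state equation exactly within a cycle, the coupled iteration converges to the optimum of this model problem (theta = 0.5, u = 1).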
Hessian approximation: In order to prove convergence of the simultaneous One-shot method on a theoretical level, the preconditioners should approximate the Hessian of an augmented Lagrangian function that involves the residuals of the state and adjoint equations. Numerically, we approximate the Hessian through consecutive limited-memory BFGS updates based on the current reduced gradient (thus assuming that the residual term is small). Alternatively, one might approximate the Hessian with a scaled identity matrix, which drastically reduces computational complexity and has already proven successful in various applications of the One-shot method. It should be noted that the Hessian with respect to the classification parameters can be computed directly, as it involves only the second derivative of the loss function in (2.6) and the regularizer.
Stepsize selection: The stepsize is selected through a standard line-search procedure based on the current value of the objective function, e.g., a backtracking line-search satisfying the (strong) Wolfe conditions.
Stopping criterion: Since the One-shot method targets optimality and feasibility of the state, adjoint, and control variables simultaneously, the stopping criterion should involve not only the norm of the reduced gradient, but also the norms of the state and adjoint residuals. In the context of network training, however, solving the optimization problem to high accuracy is typically not desired, in order to prevent overfitting. We therefore compute a validation accuracy in each iteration of the above algorithm by applying the current network controls to a separate validation data set. We terminate training if the current network controls produce a high validation accuracy, rather than focusing on the current residuals of the state, adjoint, and gradient norms.
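A sketch of such a validation-based stopping test (thresholds and names hypothetical) might read:

```python
def should_stop(val_acc_history, target=0.99, patience=10):
    """Stop when validation accuracy reaches a target, or has stopped improving
    over the last `patience` cycles (both thresholds are hypothetical)."""
    if not val_acc_history:
        return False
    if val_acc_history[-1] >= target:
        return True
    if len(val_acc_history) > patience:
        best_recent = max(val_acc_history[-patience:])
        return best_recent <= max(val_acc_history[:-patience])
    return False
```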
5 Numerical Results
We investigate the computational benefits of the simultaneous layer-parallel training approach on three test cases. For all test cases, our focus is on the ability to achieve speedup in training runtimes for very deep neural networks by introducing parallelism between the layers. It is likely, though not explored here, that greater combined speedups are possible by additionally using data-parallelism or parallelizing inside of each layer. Further studies are required to better understand the trade-off in distributing parallel work between the layer-parallel and data-parallel dimensions.
5.1 Test Cases
Level set classification (Peaks example):
As a first step, we consider the test problem suggested in  for classifying grid points into five (non-convex) level sets of a smooth nonlinear function (Figure 2(a)). The training data set consists of randomly chosen points and unit vectors which represent the probability that a point belongs to level set . The goal is to train a network that predicts the correct level sets for new points in (validation points).
We choose a ResNet architecture with smooth ReLU activation (i.e., smoothed around zero) and define the linear operations at each layer to be a dense matrix representation of the weights . We choose a network depth of discretized with up to layers and a network width of such that .
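The smoothed ReLU can be realized in different ways; the sketch below uses one common choice, a C^1 quadratic blend around zero (the paper's exact smoothing and the smoothing width `eps` may differ):

```python
import numpy as np

def smoothed_relu(x, eps=0.1):
    """ReLU smoothed around zero: exact ReLU for |x| >= eps, and a
    C^1 quadratic blend (x + eps)^2 / (4*eps) on (-eps, eps).
    This is one common smoothing; the paper's variant may differ."""
    return np.where(x <= -eps, 0.0,
           np.where(x >= eps, x, (x + eps) ** 2 / (4 * eps)))
```

Away from the origin the function matches the plain ReLU, so the network behaves identically there while gaining the differentiability needed by gradient-based training.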
Hyperspectral image segmentation (Indian Pines):
In this test case, we consider a soil segmentation problem based on a hyperspectral image data set. The input data consists of hyperspectral bands over a single landscape in Indiana, US (Indian Pines data set ) with pixels. For each pixel, the data set contains spectral reflectance bands which represent different portions of the electromagnetic spectrum in the wavelength range . The goal is to train a network that assigns each pixel of the scene to one of class labels representing the type of land-cover present at that pixel (such as alfalfa, corn, soybean, wheat, etc.), see Figure 2(b).
We use the spectral bands of randomly chosen pixel points, , together with their corresponding class probability vectors (unit vectors) for training. The network architecture is a ResNet with smoothed ReLU activation (i.e., smoothed around zero), and we define the linear operations at each layer to be a dense matrix representation of the weights . We choose a network depth of discretized with up to layers and a network width of channels, corresponding to the 220 reflectance bands.
MNIST image classification (MNIST):
As a final example, we consider the now-classic MNIST  test case for classifying handwritten digits encoded as grayscale images (Figure 2(c)). Our objective for this test case is to demonstrate the scalability of the layer-parallel approach over an increasing number of layers. While we obtain reasonable validation accuracy, the objective is not to develop an optimal ResNet for this problem. Further, we obtained the timings below with our own straightforward implementation of convolutions, to ensure compatible layer-to-layer propagators with XBraid for our initial tests. Future work will use a fast convolution library, which will provide a substantial speedup to both the serial and layer-parallel codes.
We use a ResNet architecture with activation and define internal layers by the linear operator using convolution kernels of width . This yields a weight tensor at each layer of size . The parameters to be trained are at each layer. The network is defined to have a depth of and is discretized with up to layers.
Figure 2: Classes of the Peaks example (test case 1), sample band and true classes of the Indian Pines data set (test case 2), and examples from the MNIST data set (test case 3).
The Peaks and Indian Pines computations were performed on the RHRK cluster Elwetritsch II at TU Kaiserslautern. Elwetritsch II has 485 nodes based on Haswell (2x8 cores, 64GB) and Skylake (2x12 cores, 96GB) architectures. The computations for the MNIST results were performed on the Skybridge capacity cluster at Sandia National Laboratories. Skybridge is a Cray containing 1848 nodes with two 8 core Intel 2.6 GHz Sandy Bridge processors, 64GB of RAM per node and an Infiniband interconnect.
5.2 Layer-Parallel Scaling and Performance Validation
First, we investigate runtime scaling of the layer-parallel MGRIT propagation for a single objective function and gradient evaluation, and compare it to conventional serial-in-layer forward and backward propagation. Here, we keep the network weights fixed and propagate a batch of examples of sizes for the Peaks, Indian Pines and MNIST test cases, respectively, through the network. We choose a coarsening factor of to set up a hierarchy of successively coarser layer grids for the multigrid scheme.
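The construction of this hierarchy of layer grids can be sketched as follows; the coarsening factor `cf` and the coarsest grid size are inputs, and the cutoff value `coarsest` is an illustrative choice, not the paper's setting:

```python
def layer_grid_hierarchy(n_layers, cf, coarsest=4):
    """Build the hierarchy of layer grids used by MGRIT: each level
    keeps every cf-th layer of the level above, until the coarsest
    grid is small enough to be treated serially."""
    levels = [n_layers]
    while levels[-1] // cf >= coarsest:
        levels.append(levels[-1] // cf)
    return levels
```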
Figure 4 shows the convergence history of the MGRIT iterations for two different problem sizes using and layers. We monitor the relative drop of the state and adjoint residual norms and observe fast convergence for all test cases that is independent of the number of layers.
Figure 5 presents a weak-scaling study for the layer-parallel MGRIT scheme. Here, we double the number of layers as well as the number of compute cores while keeping the ratio fixed, such that each compute unit processes layers. Runtimes are measured for one objective function and gradient evaluation, using a relative stopping criterion of orders of magnitude for the MGRIT residual norms. Note that the layer-serial data points have been added for comparison, even though they are executed on only one core. For the layer-serial propagation, doubling the number of layers leads to a doubling of the runtime. The layer-parallel MGRIT approach, however, yields nearly constant runtimes independent of the problem size. The resulting speedups are reported in Table 1. Since the layer-parallel MGRIT approach removes the linear runtime scaling of the conventional layer-serial propagation, the resulting speedups increase linearly with the problem size, yielding up to a factor of x for the MNIST case using layers and cores. Further speedup can be expected when considering ever more layers (and computational resources).
A strong scaling study is presented in Figure 6 for various numbers of layers. Here, we keep the problem sizes fixed and measure the time-to-solution for one gradient evaluation with MGRIT for increasing numbers of compute cores. It shows good strong scaling behavior for all test cases, independent of the number of layers. The crossover point where the layer-parallel MGRIT approach shows speedup over the layer-serial propagation is around cores for all cases (compare with the layer-serial runtimes from Table 1).
5.3 Simultaneous Layer-Parallel Training Validation
Next, we investigate the simultaneous layer-parallel training, using layer-parallel MGRIT iterations in each outer training iteration (see Algorithm 2). We compare runtimes of the simultaneous layer-parallel training with a conventional layer-serial training approach, choosing the same preconditioning Hessian approximation (L-BFGS) and the same initial network parameters for both approaches. However, we tune the optimization hyper-parameters (such as regularization parameters, stepsize selection, etc.) separately for each scheme, in order to find the setting that reaches a prescribed validation accuracy with the fewest iterations and minimal runtime.
For the Peaks example, we train a network with layers distributed onto compute cores; for the Indian Pines data set and the MNIST case we choose layers distributed onto compute cores, giving layers per processor in all cases. Figure 7 plots the training history over iteration counts (top) as well as runtime (bottom). The top figures confirm that both approaches reach comparable performance in terms of the training result (optimization iteration counts, training loss and validation accuracy). Hence, reducing the accuracy of the inner multigrid iterations for solving the state and adjoint equations within a simultaneous training framework does not deteriorate the training behavior. However, each iteration of the simultaneous layer-parallel approach is much faster than one of the layer-serial approach, due to the layer parallelization and the reduced state and adjoint accuracy. Therefore, the overall runtime for reaching the same final training result is reduced drastically (bottom figures). Runtime speedups are reported in Table 2. While these results have been computed for selected fixed , it is expected that the speedup scales linearly with increasing numbers of layers, similar to the observation in Table 1.
Table 2: Runtimes and speedups of simultaneous layer-parallel over layer-serial training.

| Test case | Layers | Cores | Layer-serial runtime | Layer-parallel runtime | Speedup |
|---|---|---|---|---|---|
| Peaks example | 1024 | 256 | 4096 sec | 683 sec | 6.0 |
| Indian Pines | 512 | 128 | 2623 min | 597 min | 4.4 |
| MNIST | 512 | 128 | 619 min | 71 min | 8.5 |
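To make the structure of one outer iteration of the simultaneous approach concrete, it can be sketched as below; all function names are placeholders, and the number of inner MGRIT iterations `n_inner` is a tunable parameter rather than the paper's setting:

```python
def simultaneous_training_step(mgrit_state, mgrit_adjoint, reduced_gradient,
                               update_controls, n_inner=2):
    """One outer iteration of simultaneous layer-parallel training:
    a few inexact MGRIT iterations for the state and adjoint equations,
    followed by a preconditioned update of the network controls."""
    for _ in range(n_inner):
        mgrit_state()    # relax the forward (state) equations, parallel across layers
    for _ in range(n_inner):
        mgrit_adjoint()  # relax the backward (adjoint) equations
    g = reduced_gradient()
    update_controls(g)   # e.g. an L-BFGS-preconditioned step with line search
```

Because the state and adjoint are only relaxed, not solved to high accuracy, each outer iteration is cheap; convergence of the overall scheme is driven by the outer optimization loop.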
6 Conclusion

In this paper, we provide a proof-of-concept for layer-parallel training of deep residual neural networks (ResNets). The similarity of training ResNets to the optimal control of nonlinear time-dependent differential equations motivates us to use parallel-in-time methods that have been popular in many engineering applications. The developed method is based on nonlinear multigrid and introduces a new form of parallelism across layers.
We demonstrate two options to benefit from the layer-parallel approach. First, the nonlinear multigrid reduction in time (MGRIT) method can be used to replace forward and backward propagation in existing training algorithms, including for stochastic approximation methods such as SGD. In our experiments, this leads to speedup over serial implementations when using more than 16 compute nodes. Second, additional savings can be obtained through the simultaneous layer-parallel training, which uses only inexact forward and backward propagations.
While the reported speedups might seem small in terms of parallel efficiency, these reductions can be of significant importance for large overall training runtimes. When bare training runtimes are on the order of days, any runtime reduction is welcome, as long as computational resources are available. Further, since training a network typically involves a careful choice of hyper-parameters, faster training runtimes enable faster hyper-parameter optimization and thus eventually lead to better training results in general. Lastly, we mention that such efficiencies for multigrid-in-time are not uncommon , where the nonintrusiveness of MGRIT contributes to the seemingly low efficiency, as does the fact that we define the efficiency of MGRIT with respect to an optimal serial algorithm. If the efficiency were defined with respect to MGRIT on a single core, the efficiencies would be much higher.
Motivated by these first promising results, we will investigate the use of layer-parallel training for more challenging learning tasks, including more complex image-recognition problems. Further reducing the memory footprint of our algorithm in those applications motivates the use of reversible networks arising from hyperbolic systems . A challenge arising here is the interplay of MGRIT and hyperbolic systems.
References

-  M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. Tensorflow: a system for large-scale machine learning. In OSDI, volume 16, pages 265–283, 2016.
-  Y. S. Abu-Mostafa, M. Magdon-Ismail, and H.-T. Lin. Learning from data, volume 4. AMLBook New York, NY, USA:, 2012.
-  M. F. Baumgardner, L. L. Biehl, and D. A. Landgrebe. 220 band aviris hyperspectral image data set: June 12, 1992 indian pine test site 3, Sep 2015.
-  Y. Bengio et al. Learning deep architectures for AI. Foundations and trends® in Machine Learning, 2(1):1–127, 2009.
-  G. Biros and O. Ghattas. Parallel Lagrange–Newton–Krylov–Schur Methods for PDE-Constrained Optimization. Part I: The Krylov–Schur Solver. SIAM Journal on Scientific Computing, 27(2):687–713, 2005.
-  A. Bordes, S. Chopra, and J. Weston. Question Answering with Subgraph Embeddings. arXiv preprint arXiv:1406.3676, June 2014.
-  A. Borzì and V. Schulz. Computational optimization of systems governed by partial differential equations. SIAM, 2011.
-  T. Bosse, N. Gauger, A. Griewank, S. Günther, L. Kaland, and et al. Optimal design with bounded retardation for problems with non-separable adjoints. International Series of Numerical Mathematics, 165:67–84, 2014.
-  T. Bosse, N. Gauger, A. Griewank, S. Günther, and V. Schulz. One-shot approaches to design optimization. International Series of Numerical Mathematics, 165:43–66, 2014.
-  T. Bosse, L. Lehmann, and A. Griewank. Adaptive sequencing of primal, dual, and design steps in simulation based optimization. Computational Optimization and Applications, 57(3):731–760, 2014.
-  A. Brandt. Multi–level adaptive solutions to boundary–value problems. Math. Comp., 31(138):333–390, 1977.
-  W. L. Briggs, V. E. Henson, and S. F. McCormick. A multigrid tutorial. SIAM, Philadelphia, PA, USA, 2nd edition, 2000.
-  B. Chang, L. Meng, E. Haber, L. Ruthotto, D. Begert, and E. Holtham. Reversible architectures for arbitrarily deep residual neural networks. In AAAI Conference on AI, 2018.
-  T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud. Neural ordinary differential equations. arXiv preprint arXiv:1806.07366, 2018.
-  R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12(Aug):2493–2537, 2011.
-  J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. In Advances in neural information processing systems, pages 1223–1231, 2012.
-  V. Dobrev, T. Kolev, N. Petersson, and J. Schroder. Two-level convergence theory for multigrid reduction in time (MGRIT). Copper Mountain Special Selection, SIAM J. Sci. Comput. (accepted), 2016.
-  W. E. A Proposal on Machine Learning via Dynamical Systems. Comm. Math. Statist., 5(1):1–11, 2017.
-  R. D. Falgout, S. Friedhoff, T. V. Kolev, S. P. MacLachlan, and J. B. Schroder. Parallel time integration with multigrid. SIAM J. Sci. Comput., 36(6):C635–C661, 2014. LLNL-JRNL-645325.
-  R. D. Falgout, S. Friedhoff, T. V. Kolev, S. P. MacLachlan, J. B. Schroder, and S. Vandewalle. Multigrid methods with space–time concurrency. Computing and Visualization in Science, 18(4-5):123–143, 2017.
-  R. D. Falgout, T. A. Manteuffel, B. O’Neill, and J. B. Schroder. Multigrid reduction in time for nonlinear parabolic problems: A case study. SIAM Journal on Scientific Computing, 39(5):S298–S322, 2017.
-  M. J. Gander. 50 years of time parallel time integration. In T. Carraro, M. Geiger, S. Körkel, and R. Rannacher, editors, Multiple Shooting and Time Domain Decomposition, pages 69–114. Springer, 2015.
-  M. J. Gander and S. Vandewalle. Analysis of the parareal time-parallel time-integration method. SIAM J. Sci. Comput., 29(2):556–578, 2007.
-  N. Gauger and E. Özkaya. Single-step one-shot aerodynamic shape optimization. International Series of Numerical Mathematics, 158:191–204, 2009.
-  M. B. Giles and N. A. Pierce. An introduction to the adjoint approach to design. Flow, Turbulence and Combustion, 65(3):393–415, 2000.
-  A. N. Gomez, M. Ren, R. Urtasun, and R. B. Grosse. The reversible residual network: Backpropagation without storing activations. In Adv Neural Inf Process Syst, pages 2211–2221, 2017.
-  I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, Nov. 2016.
-  A. Griewank and A. Walther. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. SIAM, 2008.
-  S. Günther, N. R. Gauger, and J. B. Schroder. A non-intrusive parallel-in-time adjoint solver with the XBraid library. Comput. Vis. Sci., 19:85–95, 2018. available at arXiv, math.OC/1705.00663.
-  S. Günther, N. R. Gauger, and J. B. Schroder. A non-intrusive parallel-in-time approach for simultaneous optimization with unsteady pdes. Optimization Methods and Software (to appear), 2018. arXiv preprint arXiv:1801.06356.
-  E. Haber and L. Ruthotto. Stable architectures for deep neural networks. Inverse Probl., 34:014004, 2017.
-  E. Haber, L. Ruthotto, E. Holtham, and S.-H. Jun. Learning across scales - A multiscale method for convolution neural networks. In Proceedings of the AAAI National Conference on Artificial Intelligence, pages 1–8, 2018.
-  A. Harlap, D. Narayanan, A. Phanishayee, V. Seshadri, N. Devanur, G. Ganger, and P. Gibbons. Pipedream: Fast and efficient pipeline parallel dnn training, 2018.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
-  K. He, X. Zhang, S. Ren, and J. Sun. Identity Mappings in Deep Residual Networks. In European Conference on Computer Vision, pages 630–645, Mar. 2016.
-  G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
-  F. N. Iandola, M. W. Moskewicz, K. Ashraf, and K. Keutzer. Firecaffe: near-linear acceleration of deep neural network training on compute clusters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2592–2600, 2016.
-  K. Ito, K. Kunisch, V. Schulz, and I. Gherman. Approximate nullspace iterations for KKT systems. SIAM Journal on Matrix Analysis and Applications, 31(4):1835–1847, 2010.
-  S. Jean, K. Cho, R. Memisevic, and Y. Bengio. On Using Very Large Target Vocabulary for Neural Machine Translation. arXiv preprint arXiv:1412.2007, Dec. 2014.
-  J. Keuper and F.-J. Pfreundt. Distributed training of deep neural networks: Theoretical and practical limits of parallel scalability. In Proceedings of the Workshop on Machine Learning in High Performance Computing Environments, pages 19–26. IEEE Press, 2016.
-  A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 61:1097–1105, 2012.
-  Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
-  Y. LeCun, B. E. Boser, and J. S. Denker. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems, pages 396–404, 1990.
-  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
-  Y. Lu, A. Zhong, Q. Li, and B. Dong. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. arXiv preprint arXiv:1710.10121, 2017.
-  J. Nocedal and S. Wright. Numerical Optimization. Springer Science+Business Media, 2nd edition, 2006.
-  J. B. Schroder. Parallelizing over artificial neural network training runs with multigrid. Technical report, Lawrence Livermore National Laboratory, LLNL-JRNL-736173, arXiv:1708.02276, 2017.
-  U. Trottenberg, C. Oosterlee, and A. Schüller. Multigrid. Academic Press, San Diego, 2001.
-  Q. Wang, P. Moin, and G. Iaccarino. Minimal repetition dynamic checkpointing algorithm for unsteady adjoint calculation. SIAM Journal on Scientific Computing, 31(4):2549–2567, 2009.
-  XBraid: Parallel multigrid in time. software available at https://github.com/XBraid/xbraid/.
-  J. C. Ziems and S. Ulbrich. Adaptive multilevel inexact SQP methods for PDE-constrained optimization. SIAM Journal on Optimization, 21(1):1–40, 2011.