Stable Architectures for Deep Neural Networks

05/09/2017 ∙ by Eldad Haber, et al.

Deep neural networks have become invaluable tools for supervised machine learning, e.g., classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Important issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper we propose new forward propagation techniques inspired by systems of Ordinary Differential Equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.







1 Introduction

In this work, we propose new architectures for Deep Neural Networks (DNN) and exemplarily show their effectiveness for solving supervised Machine Learning (ML) problems; for a general overview of DNN and ML see, e.g., [40, 21, 1, 22] and references therein. We consider the following classification problem: Assume we are given training data consisting of s feature vectors, y_1, ..., y_s ∈ R^n, and label vectors, c_1, ..., c_s ∈ R^m, whose k-th components represent the likelihood of an example belonging to class k. The goal is to learn a function that approximates the data-label relation on the training data and generalizes well to similar unlabeled data. Our goals in this work are to highlight the relation of the learning problem to dynamic inverse problems, analyze its stability and ill-posedness for commonly used architectures, and derive new architectures that alleviate some of these difficulties for arbitrarily deep architectures.

We are particularly interested in deep learning, i.e., machine learning using neural networks with many hidden layers. DNNs have been successful in supervised learning, particularly when the relationship between the data and the labels is highly nonlinear; see, e.g.,

[34, 30, 5, 35] and references therein. Their depth (i.e., their number of layers) allows DNNs to express complex data-label relationships, since each layer nonlinearly transforms the features and therefore effectively filters their information content.

Given the training data, an inverse problem needs to be solved in order to train a given network architecture. This problem, also called the learning problem, aims at finding a parameterization of the DNN that explains the data-label relation and generalizes well to new unlabeled data. Clearly, using deeper network architectures increases the capacity of the network but also the dimensionality, and thus the computational complexity, of the parameter estimation problem. Additionally, more labeled data is required to calibrate very deep networks reliably. Therefore, despite the fact that neural networks have been used since the early 1970s, deep learning has only recently revolutionized many applications, fueled by advances in computational hardware and the availability of massive data sets.

Well-known sources of difficulty in deep learning are the dimensionality and non-convexity of the associated optimization problem. Traditionally, stochastic gradient descent methods have been used 

[43]. It has been observed that the performance of the DNN can be highly dependent on the choice of optimization algorithm and sample size [14, 11]. Furthermore, it has been noted that some optimization algorithms yield DNNs that generalize poorly to new unlabeled data [33].

Additional difficulties in deep learning stem from instabilities of the underlying forward model, most importantly, the propagation of features through the DNN. As has been shown in [11], the output of some networks can be unstable with respect to small perturbations in the original features. A related problem is the observation of vanishing or exploding gradients [6]. These results are unsettling since predictions made by networks with unstable forward propagation are very sensitive to small perturbations of the input features (as is common, e.g., in adversarial attacks), which may render the network useless in practice.

The main goal of this work is to gain new insight into the stability of the forward propagation and the well-posedness of the learning problem summarized in the following two questions:

  1. Given a network architecture and parameters obtained by some optimization process, is the forward propagation problem well-posed?

  2. Is the learning problem well-posed? In other words, given sufficient training are there parameters such that the DNN generalizes well or can generalization be improved by adding appropriate regularization?

The first question is important because, while it may be possible to fit the training data even for unstable forward propagation models, the trained network is unlikely to generalize. In other words, small deviations in the data, e.g., due to noise, may be drastically amplified by the forward propagation, resulting in incorrect labels. We show that the forward problem can be thought of as a discretization of an Ordinary Differential Equation (ODE). Therefore, the stability of the network corresponds to the stability of its underlying ODE. Based on this observation we develop stability criteria for a simplified version of the commonly used Residual Network (ResNet) architecture [27] and develop new network architectures that are ensured to be stable and lead to well-posed learning problems.

The paper is organized as follows. In Section 2 we give a brief mathematical derivation of the deep learning problem illustrated using the ResNet architecture. In Section 3 we analyze the stability of the simplified ResNet forward propagation and the well-posedness of the resulting learning problem. Our analysis and examples suggest stability criteria, and as a result, in Section 4, we propose three new stable architectures. In Section 5 we propose regularization functions favoring smoothness of parameters and multi-level ideas to initialize deep networks. In Section 6 we demonstrate the effectiveness of our new architectures on a number of small-scale model problems and explore their effectiveness for solving image classification problems. Finally, in Section 7 we summarize the paper.

2 Mathematical Formulation of the Deep Learning Problem

In this section we briefly describe three main ingredients of deep learning relevant to our work; for a comprehensive introduction see, e.g., [40, 21, 1, 22]. First, we outline forward propagation techniques, which transform the input features in a nonlinear way to filter their information. Second, we describe classification, which predicts the class label probabilities using the features at the output layer (i.e., the output of the forward propagation). Finally, we formulate the learning problem, which aims at estimating parameters of the forward propagation and classification that approximate the data-label relation.

For notational convenience we stack the training features and labels row-wise into matrices Y_0 ∈ R^{s×n} and C ∈ R^{s×m}.

To exemplify our discussion of forward propagation we consider a simplified version of the Residual Neural Network (ResNet) [27] model that has been very successful in classifying images using deep network architectures; see [22] for other options. In ResNets, the forward propagation of the input values, Y_0, through a network consisting of N layers is given by

Y_{j+1} = Y_j + h σ(Y_j K_j + b_j), for j = 0, 1, ..., N − 1.   (2.1)
The propagation in Eq. 2.1 is parametrized by the nonlinear activation function σ and affine transformations represented by their weights, K_j ∈ R^{n×n}, and biases, b_j. We augmented the original formulation in [27] by the parameter h > 0 in order to increase the stability of the forward propagation and allow for a continuous interpretation of the process; see also Section 3. The values Y_1, ..., Y_{N−1} are also called hidden layers and Y_N is called the output layer. The activation function is applied element-wise and is typically (piecewise) smooth and monotonically non-decreasing. As two commonly used examples, we consider the hyperbolic tangent and the Rectified Linear Unit (ReLU) activations

σ(x) = tanh(x)   and   σ(x) = max(0, x).
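As a small illustrative sketch (not part of the original presentation), the two activations and their derivatives can be written as follows; the derivatives σ′ ≥ 0 play a role in the stability analysis of Section 3:

```python
import numpy as np

def tanh_act(x):
    """Hyperbolic tangent activation, applied element-wise."""
    return np.tanh(x)

def relu_act(x):
    """Rectified Linear Unit (ReLU), applied element-wise."""
    return np.maximum(0.0, x)

# Both activations are monotonically non-decreasing,
# i.e., their (sub)derivatives are non-negative everywhere.
def tanh_deriv(x):
    return 1.0 - np.tanh(x) ** 2      # values in (0, 1]

def relu_deriv(x):
    return (x > 0).astype(float)      # values in {0, 1}
```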

The class label probabilities are predicted using the values at the output layer, Y_N, a hypothesis function h, and its associated weights, W, and bias, μ. Writing Z = Y_N W + e_s μ^T, where e_s denotes the s-dimensional vector of all ones, for Bernoulli variables (i.e., m = 1) it is natural to consider the logistic regression function

h(Z) = 1 / (1 + exp(−Z)),

where the exponential and the division are applied element-wise. For Multinomial distributions we use the softmax function

h(Z)_{k,l} = exp(Z_{k,l}) / Σ_{l′=1}^{m} exp(Z_{k,l′}).
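A minimal NumPy sketch of the two hypothesis functions (illustrative; the max-shift in the softmax is a standard numerical safeguard, not part of the formulation above):

```python
import numpy as np

def logistic(z):
    """Element-wise logistic function for Bernoulli labels (m = 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def softmax(Z):
    """Row-wise softmax for multinomial labels.

    Subtracting the row-wise maximum does not change the result
    but avoids overflow in the exponential."""
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)
```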
The learning problem aims at estimating the parameters of the forward propagation (i.e., K_0, ..., K_{N−1} and b_0, ..., b_{N−1}) and the classifier (W and μ) so that the DNN accurately approximates the data-label relation for the training data and generalizes to new unlabeled data. As we show below, the learning problem can be cast as a dynamic inverse problem, which provides new opportunities for applying theoretical and computational techniques from parameter estimation to deep learning problems. We phrase learning as an optimization problem

min over K, b, W, μ of  S(h(Y_N W + e_s μ^T), C) + α R(K, b, W, μ),   (2)

where the loss function S is convex in its first argument and measures the quality of the predicted class label probabilities, the convex regularizer R penalizes undesirable (e.g., highly oscillatory) parameters, and the parameter α > 0 balances between minimizing the data fit and regularity of the parameters. A simple example of a loss function is the sum-of-squared-difference function S(C_pred, C) = (1/2) ||C_pred − C||_F^2. Since our numerical experiments deal with classification, we use cross entropy loss functions. Choosing an "optimal" regularizer, R, and regularization parameter, α, is both crucial and nontrivial. Commonly, Tikhonov regularization [19, 26, 48], also referred to as weight decay, has been used [23], although other possibilities enforcing sparsity or other structure have been proposed [41]. We introduce novel regularization functions in Section 5. For simplicity, we assume that a suitable value of α is chosen by the user or that it is chosen dynamically as suggested in [12].

There are numerous approaches to solving the learning problem. In this work we use a simple block coordinate descent method to demonstrate the properties of the forward propagation. Our method alternates between updating the parameters of the classifier, fixing the current value of the propagated features, and then updating the parameters of the forward propagation while keeping the updated weights of the classifier fixed. The first problem is typically convex, and the latter problem is generally non-convex due to the forward propagation process. Both steps are based on subsampling the training data. To this end, note that most common loss functions can be written as a sum over all examples, i.e.,

S(C_pred, C) = Σ_{k ∈ T} S_k(c_pred,k, c_k),

where T is a randomly chosen set of examples updated in each iteration of the block coordinate descent method. The size of the batches is a parameter in our algorithm whose choice depends on the size and complexity of the problem and on resource considerations. In the following, we assume the sample size is constant; for adaptive selection of sample sizes see, e.g., [14]. In each iteration of the block coordinate descent scheme, our algorithm approximately solves the resulting classification problem using a Newton-PCG method, i.e., an inexact Newton method that uses a Preconditioned Conjugate Gradient (PCG) method to determine a search direction (see, e.g., [42, 29, 47]). Subsequently, the weights of the forward propagation are updated using a Gauss-Newton-PCG method. Note that gradient and approximated Hessian computations require matrix-vector products with the derivative matrices of the values of the output layer, Y_N, with respect to K and b. The matrix-vector products with the derivative matrix and its transpose can be computed without its explicit construction through forward and backward propagation, respectively; see also [38]. However, this requires storing (or recomputing) the values at the intermediate layers; see, for instance, Section 4.4 for derivative computation. Therefore, as also suggested in [13], we subsample further to reduce the cost of the Hessian matrix-vector products in our PCG scheme.
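The alternating scheme can be sketched on a toy problem as follows. This is only an illustration of the structure, not the method of the paper: the Newton-PCG and Gauss-Newton-PCG solvers are replaced by a least-squares solve for the classifier and a crude finite-difference gradient step for the propagation weights, and all data and names are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: s examples, n features, m targets (all illustrative).
s, n, m = 200, 5, 3
Y0 = rng.standard_normal((s, n))
K_true = rng.standard_normal((n, n))
C = np.tanh(Y0 @ K_true) @ rng.standard_normal((n, m))

def propagate(Y, K, h=1.0):
    """One simplified residual layer: Y + h * tanh(Y K)."""
    return Y + h * np.tanh(Y @ K)

def loss(K, W, batch):
    R = propagate(Y0[batch], K) @ W - C[batch]
    return 0.5 * np.sum(R ** 2)

K = np.zeros((n, n))
W = np.zeros((n, m))
full = np.arange(s)
loss0 = loss(K, W, full)

for it in range(50):
    batch = rng.choice(s, size=64, replace=False)   # subsample the examples
    # (1) classifier update: a convex least-squares problem in W.
    Z = propagate(Y0[batch], K)
    W = np.linalg.lstsq(Z, C[batch], rcond=None)[0]
    # (2) propagation update: finite-difference gradient step in K
    #     (a cheap stand-in for the Gauss-Newton-PCG step of the text).
    eps, G = 1e-6, np.zeros_like(K)
    for i in range(n):
        for j in range(n):
            E = np.zeros_like(K); E[i, j] = eps
            G[i, j] = (loss(K + E, W, batch) - loss(K - E, W, batch)) / (2 * eps)
    K -= 1e-3 * G
```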

Our implementation includes computing the validation error in each iteration of the block coordinate descent method. The final output of our algorithm is the set of parameters that achieves the lowest validation error.

3 Stability and well-posedness of the forward propagation

In this section we analyze the stability of the ResNet forward problem in Eq. 2.1 and illustrate why some choices of transformation weights may generate instabilities or prohibit effective learning altogether.

It is well-known that any parameter estimation problem requires a well-posed forward problem, i.e., a problem whose output is continuous with respect to its input. For example, practical image classification algorithms need to be robust against noisy or slightly shifted input images. Ill-posedness of the forward problem implies that even if the estimated parameters lead to a small training error, they are likely to perform poorly on perturbations of that data. In other words, a network whose forward propagation is ill-posed will generalize poorly. Thus, well-posedness of the forward propagation is a necessary condition for obtaining DNNs that generalize well.

The following discussion also gives new perspectives on two well-known phenomena in the deep learning community: Vanishing and exploding gradients; see, e.g., [6]. These phenomena refer to the gradient of the objective function in Eq. 2 and pose a severe challenge for very deep architectures. Note that the gradient represents the sensitivity of the output with respect to a perturbation in the input. Thus, an exploding gradient implies that the output is unstable with respect to the input. Similarly a vanishing gradient implies that the output is insensitive with respect to the input. Clearly both cases prohibit effective training, but more importantly, may not provide DNNs that generalize well.

To understand these phenomena we consider a simplified version of the forward propagation in ResNets given in Eq. 2.1. As pointed out in [25], the forward propagation can be seen as an explicit Euler discretization of the nonlinear Ordinary Differential Equation (ODE)

ẏ(t) = σ(K(t) y(t) + b(t)),  y(0) = y_0,   (3.5)

over a time interval [0, T]. The final time T and the magnitude of K control the depth of the network. The ODE is stable if K is changing sufficiently slowly and

max_i Re(λ_i(J(t))) ≤ 0, for all t ∈ [0, T],   (3.6)

where Re(·) denotes the real part and λ_i(J(t)) is the i-th eigenvalue of the Jacobian of the right hand side in Eq. 3.5, denoted by J(t). A more accurate statement that uses kinematic eigenvalues of the Jacobian can be found in [3]. Here, the Jacobian is

J(t) = diag(σ′(K(t) y(t) + b(t))) K(t).   (3.7)

Since the activation function is typically monotonically non-decreasing, i.e., σ′ ≥ 0, Eq. 3.6 is satisfied if K changes sufficiently slowly and

max_i Re(λ_i(K(t))) ≤ 0, for all t ∈ [0, T].   (3.8)
Controlling the smoothness of K can be done by regularization as described in Section 5. To ensure the stability of the overall discrete forward propagation, we also require the discrete version of the ODE to be solved with a sufficiently small step size h, as summarized in the following well-known lemma.

Lemma 1 (Stability of Forward Euler Method)

The forward propagation in Eq. 2.1 is stable if

max_i |1 + h λ_i(J_j)| ≤ 1, for all j = 0, 1, ..., N − 1,   (3.9)

where λ_i(J_j) denotes the i-th eigenvalue of the Jacobian at the j-th layer.
Proof 1

See, e.g., [2, 4] for the proof of stability criteria for the forward Euler method.
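The criterion of Lemma 1 is easy to evaluate numerically for a linearized layer. The following sketch (illustrative; the function name and test matrices are made up) checks |1 + h λ_i| ≤ 1 for a given Jacobian:

```python
import numpy as np

def euler_stable(J, h):
    """Check the forward Euler stability criterion |1 + h*lambda_i(J)| <= 1."""
    lam = np.linalg.eigvals(J)
    return bool(np.all(np.abs(1.0 + h * lam) <= 1.0 + 1e-12))

# Antisymmetric Jacobian: eigenvalues are +-i, so |1 + h*lambda| = sqrt(1 + h^2) > 1
# and plain forward Euler is (mildly) unstable for any h > 0.
J_rot = np.array([[0.0, 1.0], [-1.0, 0.0]])

# Negative real eigenvalues: stable as long as h is small enough.
J_neg = np.array([[-1.0, 0.0], [0.0, -2.0]])
```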

The above discussion suggests that the stability conditions on the continuous forward propagation in Eq. 3.8 and on its discrete analog in Eq. 3.9 need to be added to the optimization problem 2 as constraints. Otherwise, one may obtain transformation weights, K_j, that fit the training data but generate an unstable process. As discussed above, such solutions cannot be expected to generalize well to other data.

We illustrate the stability issues in ResNet using a simple example.

Example 1 (Stability of ResNet)

We consider the forward propagation of two-dimensional features through a ResNet as given by Eq. 2.1. We consider three networks consisting of identical layers, i.e., on each layer we use the hyperbolic tangent activation, the same step size h, and a constant weight matrix. To illustrate the impact of the eigenvalues of the weight matrix on the propagation, we consider three ResNets parameterized by the weight matrices in Eq. 3.10: one with positive real eigenvalues, one with negative real eigenvalues, and an antisymmetric one with imaginary eigenvalues. After propagating three feature vectors through the layers, we illustrate the different propagations in Figure 1. We represent the values at the hidden layers as colored lines in the 2D plane, where each color is associated with one feature vector. To highlight the differences in the dynamics in all three cases, we also depict the force field using black arrows in the background. This plot is often referred to as a phase plane diagram.

Figure 1: Phase plane diagrams for ResNets with identical layers parameterized by the weight matrices in Eq. 3.10, starting from three different input features. The values at the hidden layers are indicated by colored lines and arrows depict the force field. Left: due to its positive eigenvalues, the features diverge, leading to an unstable forward propagation. Center: the matrix with negative eigenvalues yields a contraction that annihilates differences in the features and renders the learning problem ill-posed. Right: the antisymmetric matrix leads to rotations, preserves distances between the features, and yields a well-posed forward propagation and learning problem.

As can be seen in the left subplot, the features diverge away from the origin and from each other when using the matrix with positive eigenvalues. Note that two of the features, which are close together initially, depart into opposite directions. This clearly suggests an unstable forward propagation that cannot be expected to generalize well. In contrast to that, the center subplot shows that the matrix with negative eigenvalues yields an accumulation point at the origin. While the forward propagation satisfies Eq. 3.8 and is thus stable, the learning problem, which requires inversion of this process, is ill-posed. Finally, the antisymmetric matrix yields a rotation in the feature space, which preserves the distances between the features and leads to a stable forward propagation and a well-posed learning problem.

Clearly, the effects depicted here are more pronounced in deeper networks with more layers and/or larger values of the step size h.
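The three regimes can be reproduced numerically. The entries of the matrices in Eq. 3.10 are not recoverable from this copy, so the following sketch uses illustrative stand-ins with the same eigenvalue structure (positive, negative, and antisymmetric/imaginary):

```python
import numpy as np

def resnet_forward(y0, K, h, n_layers):
    """Propagate y through identical simplified ResNet layers (Eq. 2.1)."""
    y = np.array(y0, dtype=float)
    for _ in range(n_layers):
        y = y + h * np.tanh(K @ y)
    return y

h, N = 0.05, 100
y0 = np.array([0.5, 0.5])

K_pos = 2.0 * np.eye(2)                      # positive eigenvalues: divergence
K_neg = -2.0 * np.eye(2)                     # negative eigenvalues: contraction
K_rot = np.array([[0.0, 1.0], [-1.0, 0.0]])  # antisymmetric: rotation

norm0 = np.linalg.norm(y0)
n_pos = np.linalg.norm(resnet_forward(y0, K_pos, h, N))
n_neg = np.linalg.norm(resnet_forward(y0, K_neg, h, N))
n_rot = np.linalg.norm(resnet_forward(y0, K_rot, h, N))
```

The norms after propagation show the unbounded growth, the collapse toward the origin, and the approximate norm preservation, respectively.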

While the first case in Example 1 indicates that Eq. 3.8 is a necessary condition for successful learning, the second case suggests that it is not sufficient. In general, when Re(λ_i(K(t))) < 0 for most times t and the network is deep (e.g., long time integration), differences in the initial feature vectors decay. In other words, for all initial conditions the trajectories y(t) approach the same accumulation point as t grows; compare with the center plot in Figure 1. Hence, even though the forward problem is stable, the inverse problem is highly ill-posed, as it is comparable to an inverse heat equation. In these situations, the gradients of the objective function in Eq. 2 will vanish, since small changes in the kernels have no noticeable impact on the values of the outputs.

Taking inspiration from the propagation of signals through neurons, a ResNet as in Eq. 2.1 with Re(λ_i(K(t))) > 0 for all i consists of neurons that amplify the signal with no upper bound (which is not biological), and a ResNet with Re(λ_i(K(t))) < 0 can be seen as a lossy network. A moderately lossy network may be advantageous when the input is noisy, since it tends to damp high-frequency oscillations. However, too much signal loss is clearly harmful, as it also annihilates relevant differences in the input features.

In summary, our discussion of ResNets illustrates that a stable forward propagation and a well-posed learning problem can be obtained for deep networks when

Re(λ_i(K(t))) ≈ 0, for all i = 1, ..., n and t ∈ [0, T].   (3.11)

In this case, the forward propagation causes only moderate amplification or loss, and thus even deep networks preserve features in the input data and allow for effective learning.

4 Stable Forward Propagation for DNNs

Motivated by the discussion in the previous section, we introduce three new forward propagation methods that are stable for arbitrarily deep neural networks and lead to well-posed learning problems. Our approaches, presented in Sections 4.1 and 4.2, use different means to enforce Jacobians whose eigenvalues have very small real part; see Eq. 3.11. The methods in Section 4.2 are inspired by Hamiltonian systems, and we propose leapfrog and Verlet integration techniques for forward propagation in Section 4.3. Finally, we compute the derivative of the Verlet method using back propagation [46] and discuss the relation between back propagation and the older and more general adjoint method [32] in Section 4.4.

4.1 Antisymmetric Weight Matrices

Perhaps the simplest way to obtain a stable forward propagation is to construct force fields whose Jacobians are antisymmetric. For example, consider the forward propagation

Y_{j+1} = Y_j + h σ(Y_j A_j + b_j), with A_j = (1/2)(K_j − K_j^T) − γ I_n,

where γ ≥ 0 is a small constant and I_n is an identity matrix. The resulting forward propagation can be seen as a forward Euler discretization of the ODE

ẏ(t) = σ( ((1/2)(K(t) − K(t)^T) − γ I_n) y(t) + b(t) ).   (4.12)

Since (1/2)(K − K^T) is antisymmetric, its eigenvalues are imaginary; for γ = 0 the same holds for the Jacobian in Eq. 3.7. Thus, the continuous ResNet parameterized by the antisymmetric weight matrix is stable and preserves information given an appropriate integration technique and sufficiently small time steps.

To stabilize the discrete forward propagation, which is based on the forward Euler method, we have added diffusion to the system in Eq. 4.12. The amount of diffusion depends on the parameter γ. While small values of γ might improve the robustness against noise in the feature vectors, too large values can render the learning problem ill-posed. Alternatively, more advanced time integration methods can be used with γ = 0, i.e., without adding diffusion.
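The spectral claim is easy to verify numerically. A minimal sketch, assuming the antisymmetric part is taken as (1/2)(K − K^T) as above: shifting by −γ I moves all eigenvalue real parts to exactly −γ, so the propagation is never amplifying:

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((4, 4))
gamma = 0.1

# Antisymmetric part of K, shifted by -gamma*I to add a little diffusion.
A = 0.5 * (K - K.T) - gamma * np.eye(4)

lam = np.linalg.eigvals(A)
```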

4.2 Hamiltonian Inspired Neural Networks

Restricting the parameter space to antisymmetric kernels is only one way to obtain a stable forward propagation. Alternatively, we can recast forward propagation as a Hamiltonian system, which has the structure

ẏ(t) = ∇_z H(y, z, t),   ż(t) = −∇_y H(y, z, t).

The function H is the Hamiltonian. In our application, Hamiltonian systems for forward propagation are attractive due to their property to conserve, rather than increase or dissipate, the energy of the system; see [2, Ch. 6] for a general introduction.

The Hamiltonian function measures the energy of the system, which in the autonomous case is conserved. A simple approach can be derived from the Hamiltonian

H(y, z) = (1/2) z^T z + G(y),

where G is at least twice continuously differentiable. This leads to the ODE system

ẏ(t) = z(t),   ż(t) = −∇G(y(t)).

Eliminating z we obtain a second order system

ÿ(t) = −∇G(y(t)).

Inspired by the continuous version of the ResNet forward propagation given in Eq. 3.5, we propose to use the following second order ODE

ÿ(t) = σ(K(t) y(t) + b(t)),   (4.13)

where in the following we assume a zero initial velocity, ẏ(0) = 0. The dynamical system in Eq. 4.13 is stable for all weight matrices with non-positive real eigenvalues, i.e., that satisfy Eq. 3.8. This constraint can either be added to the optimization problem 2 or, similar to the previous section, be enforced by design. The latter approach can be obtained, e.g., using the parametrization

K(t) = −B(t)^T B(t)

for an arbitrary matrix B(t), which guarantees that K(t) is negative semi-definite.
Note that, in contrast to the antisymmetric model in Eq. 4.12, the negative semi-definite model is based on a nonlinear parametrization, which might lead to a more challenging optimization problem. Alternatively, Eq. 3.8 can be added as a constraint to the optimization problem 2, which, however, requires expensive eigenvalue computations.

The forward propagations in Eqs. 2.1, 4.12, and 4.13 require additional constraints on K and are limited to square weight matrices. We therefore propose a network that is intrinsically stable and supports non-square weight matrices, i.e., a network whose forward propagation is stable independent of the choice of the weights. To this end, we introduce symmetry into the Hamiltonian system by defining

d/dt (y(t); z(t)) = σ( [0  K(t); −K(t)^T  0] (y(t); z(t)) + b(t) ),   (4.15)

where y and z are the two blocks of the augmented variable. Note that Eq. 4.15 can be seen as a special case of a ResNet with an augmented variable. This system is stable regardless of the spectrum or size of the matrices K(t), since the overall matrix in the linear transformation is antisymmetric. To ensure stability of the discrete version, the time step size needs to be sufficiently small; see Lemma 1.
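The antisymmetry of the augmented transformation holds for any K, including non-square ones, which can be checked directly (a small illustration with a made-up 3-by-2 matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
K = rng.standard_normal((3, 2))          # deliberately non-square

# Augmented linear transformation of Eq. 4.15: [[0, K], [-K^T, 0]].
M = np.block([[np.zeros((3, 3)), K],
              [-K.T, np.zeros((2, 2))]])

lam = np.linalg.eigvals(M)
```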
4.3 Symplectic Forward Propagation

In this section we use symplectic integration techniques for solving the discrete versions of the Hamiltonian-inspired networks in Eq. 4.13 and Eq. 4.15. Symplectic methods have been shown to capture the long time features of Hamiltonian systems and thus provide a slightly different approach as compared to the forward Euler method used in the ResNet in Eq. 2.1; we refer to [2, Ch. 6] for a detailed discussion. For the second-order ODE in Eq. 4.13 we recommend the conservative leapfrog discretization

y_{j+1} = 2 y_j − y_{j−1} + h^2 σ(K_j y_j + b_j), for j = 1, ..., N − 1.   (4.16)

To discretize the augmented network in Eq. 4.15 we use a Verlet integration, where for j = 0, ..., N − 1 we have

z_{j+1/2} = z_{j−1/2} − h σ(K_j^T y_j + b_j),
y_{j+1} = y_j + h σ(K_j z_{j+1/2} + b_j).   (4.17)
Since both discretizations are symplectic, the respective Hamiltonians are preserved if the transformation weights are time invariant and the step size is sufficiently small.

We demonstrate the long time dynamics of the two new Hamiltonian-inspired forward propagation methods using a simple example with two-dimensional features and identical layers.

Example 2

We consider two-dimensional input features propagated through networks with identical layers and the hyperbolic tangent as activation function. For the leapfrog integration we use the matrix defined in Eq. 3.10 (which satisfies the stability condition in Eq. 3.8) and a fixed step size. To illustrate the Verlet integration (which can handle non-square kernels) we use a fixed time step size and a non-square weight matrix.

To expose the short term behavior we use a moderate number of layers, and to demonstrate the non-trivial long term characteristics we use a much deeper network. We depict the phase plane diagrams for both networks and both depths in Figure 2. Even though we use identical layers with constant kernels, the features neither explode nor vanish asymptotically.

Figure 2: Phase plane diagrams for the forward propagation using the leapfrog and Verlet methods starting from two different input features (blue and red). For each network we show the short term and long term behavior for identical layers. In both cases non-trivial behavior can be observed even for constant weight matrices. (This figure is optimized for screen use.)

We have proposed three new forward propagation schemes that ensure stability for arbitrary numbers of layers. Among the three approaches, the Verlet method is the most flexible, as it can handle non-square weight matrices and does not require additional constraints on K.
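The two integrators of Eqs. 4.16 and 4.17 can be sketched compactly. This is an illustration only: the weight matrices, step size, and layer counts below are made up (the values from Example 2 are not recoverable from this copy); the leapfrog uses a negative definite matrix satisfying Eq. 3.8, and the Verlet uses a non-square one:

```python
import numpy as np

def leapfrog(y_prev, y_curr, K, b, h):
    """One conservative leapfrog step for the second-order ODE y'' = sigma(K y + b)."""
    return 2 * y_curr - y_prev + h ** 2 * np.tanh(K @ y_curr + b)

def verlet(y, z, K, b, h):
    """One Verlet step for the augmented antisymmetric system in Eq. 4.15."""
    z = z - h * np.tanh(K.T @ y + b)
    y = y + h * np.tanh(K @ z + b)
    return y, z

h, N = 0.1, 500

# Leapfrog: negative real eigenvalues (Eq. 3.8) give bounded oscillations.
# Two equal starting values encode a zero initial velocity.
K_lf = -np.eye(2)
yp = yc = np.array([0.5, 0.5])
for _ in range(N):
    yp, yc = yc, leapfrog(yp, yc, K_lf, 0.0, h)

# Verlet: the weight matrix may be non-square; stability is built in.
K_v = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, -0.5]])
y, z = np.array([0.5, -0.3, 0.2]), np.zeros(2)
for _ in range(N):
    y, z = verlet(y, z, K_v, 0.0, h)
```

Even after many layers with constant weights, the features in both cases remain bounded rather than exploding or vanishing.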

4.4 Derivatives of the Verlet Method

We now discuss the computation of derivatives for the Verlet method. The derivatives of the ResNet are standard and can be found, e.g., in [28]. Given the ResNet derivatives, the antisymmetric model can be differentiated using the chain rule and the time stepping. The leapfrog method can be treated in a similar way to usual ResNets and is therefore omitted.

Our presentation follows the standard approach used in machine learning known as back propagation [28]. However, we note that back propagation is a special case of the adjoint method [7] used in time-dependent optimal control; see, e.g., [16, 9]. The adjoint method is more general than back propagation, as it can also be used to compute the gradient of less "standard" time stepping methods. Thus, we discuss the back propagation method in the context of the adjoint method as applied to the Verlet method. We start with the usual sensitivity equation and then discuss how to efficiently compute matrix-vector products.

Exemplarily, we show how to differentiate the values at the output layer in Eq. 4.17 with respect to the weight matrix K_j at an arbitrary layer j. For simplicity we assume that K_j is square. Derivatives for non-square weight matrices and with respect to the bias, b_j, can be computed along the same lines. Clearly, the values of y and z at layers up to j are independent of K_j. Applying the chain rule to Eq. 4.17 we see that

∂z_{j+1/2}/∂K_j = −h diag(σ′(K_j^T y_j + b_j)) (I_n ⊗ y_j^T),
∂y_{j+1}/∂K_j = h diag(σ′(K_j z_{j+1/2} + b_j)) ( (z_{j+1/2}^T ⊗ I_n) + K_j ∂z_{j+1/2}/∂K_j ),

where ⊗ denotes the Kronecker product and I_n is the n × n identity matrix. We can now differentiate layer by layer, where for j′ = j + 1, ..., N − 1 we obtain

∂z_{j′+1/2}/∂K_j = ∂z_{j′−1/2}/∂K_j − h diag(σ′(K_{j′}^T y_{j′} + b_{j′})) K_{j′}^T ∂y_{j′}/∂K_j,
∂y_{j′+1}/∂K_j = ∂y_{j′}/∂K_j + h diag(σ′(K_{j′} z_{j′+1/2} + b_{j′})) K_{j′} ∂z_{j′+1/2}/∂K_j.
Combining equations 4.18, 4.19, and 4.20 the derivatives can be written compactly as a block linear system



The linear system in Eq. 4.22 is block triangular with identity matrices on its diagonal and can thus be solved explicitly using forward substitution. Such systems commonly arise in optimal control of time-dependent PDEs and in dynamic inverse problems. For more details, including notes on implementation, see, e.g., [45].

In the optimal control literature, the matrices ∂y_N/∂K and ∂y_N/∂b are often referred to as the sensitivity matrices. While these matrices can be dense and large, matrix-vector products with them can be computed efficiently by forward propagation. To this end, the right hand side of Eq. 4.22 is multiplied by the given vector and the linear system is then solved with a vector right hand side. The multiplication with the transpose, also known as the adjoint method, is done by solving the equations backwards in time. It is important to note that the back-propagation algorithm [46] is nothing but a particular implementation of the adjoint method discussed much earlier in [32].
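The reverse-mode (back propagation / discrete adjoint) differentiation of the Verlet propagation can be verified against finite differences. The following is an illustrative sketch, assuming a tanh activation, a fixed weight matrix across layers, and the made-up loss 0.5*||y_N||^2; it is not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
n, N, h = 2, 5, 0.1
K0 = rng.standard_normal((n, n)) * 0.5
b = np.zeros(n)
y0 = rng.standard_normal(n)

def forward(K):
    """Verlet forward propagation; returns trajectory, loss 0.5*||y_N||^2, and y_N."""
    y, z = y0.copy(), np.zeros(n)
    traj = []
    for _ in range(N):
        a = K.T @ y + b          # pre-activation of the z-update
        z = z - h * np.tanh(a)
        c = K @ z + b            # pre-activation of the y-update
        y_new = y + h * np.tanh(c)
        traj.append((y, z, a, c))  # stores y_j and z_{j+1/2}
        y = y_new
    return traj, 0.5 * y @ y, y

def grad_K(K):
    """Reverse-mode gradient of the loss with respect to K."""
    traj, _, yN = forward(K)
    ybar, zbar = yN.copy(), np.zeros(n)
    G = np.zeros_like(K)
    for y, z, a, c in reversed(traj):
        dc = h * (1 - np.tanh(c) ** 2) * ybar   # y-update: y+ = y + h*tanh(K z + b)
        G += np.outer(dc, z)
        zbar += K.T @ dc
        da = -h * (1 - np.tanh(a) ** 2) * zbar  # z-update: z+ = z - h*tanh(K^T y + b)
        G += np.outer(y, da)
        ybar += K @ da
    return G

# Central finite-difference check of every entry of the gradient.
G = grad_K(K0)
eps = 1e-6
G_fd = np.zeros_like(K0)
for i in range(n):
    for j in range(n):
        E = np.zeros_like(K0); E[i, j] = eps
        G_fd[i, j] = (forward(K0 + E)[1] - forward(K0 - E)[1]) / (2 * eps)
```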

5 Regularization

In this section we present derivative-based regularization functions and a multi-level learning approach that ensure smoothness of the parameters in deep learning. By biasing the learning process towards smooth time dynamics we aim at improving the stability of the forward propagation (e.g., to ensure that K changes sufficiently slowly, as assumed in Section 3) and ultimately the generalization. Intuitively, we cannot expect the network to generalize well in the absence of smoothness. While the presented regularization techniques are commonly employed in other areas of inverse problems [26, 48], e.g., imaging, their application to deep learning is, to the best of our knowledge, rather novel.

5.1 Regularizing the Forward Propagation

The perhaps most common regularization strategy used in deep learning is referred to as weight decay, which is equivalent to standard Tikhonov regularization of the form

R(K) = (1/2) Σ_{j=0}^{N−1} ||K_j||_F^2,

where ||·||_F denotes the Frobenius norm; see, e.g., [22]. While this regularization reduces the magnitude of the weights, it is not sensitive to rapidly changing weights between adjacent layers. To illustrate why this may be problematic, consider removing a single layer from a deep network. Since the network is deep, we should not expect large changes in the values of the output layer and thus similar classification errors. However, if this layer performs, e.g., a 90 degree rotation of the features while the adjacent layers keep the features unchanged, the effect will be dramatic.

Our interpretation of forward propagation as a system of non-autonomous ODEs and the stability analysis in Section 3 suggest that K should be smooth, or at least piecewise smooth, in time. To this end we propose the following new regularization for the transformation weights

R(K) = (1/(2h)) Σ_{j=1}^{N−1} ||K_j − K_{j−1}||_F^2,

which can be seen as a discretization of the integral of ||∂_t K(t)||_F^2 over [0, T].
This regularization favors weights that vary smoothly between adjacent layers. Furthermore, as we see in our numerical experiments, the regularization adds robustness to the process. We can easily add or subtract steps without significantly changing the final result, thus adding more generalizing power to our network.
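The contrast between the two regularizers can be illustrated with the 90 degree rotation example above (a hypothetical sketch; the matrices are made up): weight decay cannot distinguish a smooth sequence of layers from one containing an abrupt rotation, while the smoothness regularizer penalizes the jump:

```python
import numpy as np

def weight_decay(Ks):
    """Standard Tikhonov regularization (weight decay)."""
    return 0.5 * sum(np.sum(K ** 2) for K in Ks)

def smooth_reg(Ks, h):
    """Penalize changes between adjacent layers (a discrete time derivative)."""
    return 0.5 / h * sum(np.sum((K1 - K0) ** 2)
                         for K0, K1 in zip(Ks[:-1], Ks[1:]))

h = 0.1
I = np.eye(2)
R = np.array([[0.0, 1.0], [-1.0, 0.0]])   # a sudden 90-degree "rotation" layer

smooth_path = [I, I, I, I]                # constant weights across layers
jumpy_path = [I, R, I, I]                 # one outlier layer
```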

5.2 Regularizing the Classification Weights

We propose using smoothness regularization on the classification weights for image classification problems, e.g., in Convolutional Neural Networks (CNN); see, e.g., [22, Ch. 9]. For motivation, let the examples in Y_0 represent vectorized images. The network propagates the images in time and generates perturbed images of the same size. The hypothesis function predicts the class label probabilities based on affinely transformed output images, i.e., based on Y_N W + e_s μ^T. The number of rows in W equals the number of pixels, and each entry of Y_N W is a dot product between an output image and the classification weights for one of the classes. Noting that the rows of Y_N and the columns of W can be interpreted as discrete images sets the stage for developing regularization approaches commonly used in image processing; see, e.g., [17].

To further motivate the importance of regularization, note that if the number of examples is much larger than the number of features (pixels in the image) and there is no significant redundancy, finding the optimal classification weights given the propagated features and labels is an over-determined problem. Otherwise the problem is ill-posed and there may be infinitely many weights that yield the same classification on the observed data. The risk in both cases is overfitting, since the optimal classification weights can be highly irregular and generalize poorly. Thus, one way to enforce uniqueness and improve generalization is to regularize the weights.
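The ill-posed regime can be demonstrated in a few lines: with fewer examples than features, infinitely many weight vectors fit the data exactly, differing by null-space components (a synthetic sketch; all names and sizes are our own):

```python
import numpy as np

rng = np.random.default_rng(42)
n_examples, n_features = 5, 20          # fewer examples than features
Y = rng.standard_normal((n_examples, n_features))
c = rng.standard_normal(n_examples)

# Any w0 + v with v in the null space of Y fits the data equally well.
w0, *_ = np.linalg.lstsq(Y, c, rcond=None)   # minimum-norm solution
_, _, Vt = np.linalg.svd(Y)
v = Vt[-1]                                   # a null-space direction of Y
w1 = w0 + 10.0 * v                           # very different weights ...
r0 = np.linalg.norm(Y @ w0 - c)
r1 = np.linalg.norm(Y @ w1 - c)              # ... same (zero) residual
```

Tikhonov-type regularization resolves this non-uniqueness by selecting, e.g., the minimum-norm or smoothest of these equivalent solutions.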

A standard approach in machine learning known as pooling aims at achieving this goal; see, e.g., [22]. The main idea of pooling is to coarsen the output images and thereby decrease the dimensionality of the classification problem. The simplest approach is skipping, i.e., subsampling the image, e.g., at every other pixel. Other common options are average pooling and max pooling, where image patches are represented by their average and maximum value, respectively. From an inverse problems perspective, pooling can be thought of as a subspace regularization [26]. For example, average pooling is similar to requiring the classification weights to be constant over each image patch.
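A minimal sketch of non-overlapping average pooling and its reading as a subspace restriction (names and the patch size are our own):

```python
import numpy as np

def avg_pool(img, p):
    """Non-overlapping p-by-p average pooling (assumes divisible sizes)."""
    m, n = img.shape
    return img.reshape(m // p, p, n // p, p).mean(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
pooled = avg_pool(img, 2)   # the classifier now sees 4 values, not 16

# Equivalently, pooling restricts the classifier to the subspace of
# weight images that are constant on each p-by-p patch.
```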

An alternative approach to regularization is to interpret the ith feature and the kth classification weight of the CNN as discretizations of image functions y_i and w_k, respectively, defined on the image domain \Omega. Assuming w_k is sufficiently regular (which is to be enforced by the regularization), the score determining the probability of the ith example belonging to the kth class can be viewed as a quadrature rule,

\mathbf{w}_k^\top \mathbf{y}_i \approx \frac{n}{|\Omega|} \int_\Omega w_k(x)\, y_i(x)\, dx,

where |\Omega| denotes the volume of the image domain and n the number of pixels. To obtain weights that are insensitive to small displacements of the images it is reasonable to favor spatially smooth parameters by using regularization of the form


R(\mathbf{w}_k) = \frac{\beta}{2} \| \mathbf{L} \mathbf{w}_k \|^2,

where \mathbf{L} is a discretized differential operator. This regularization also embeds the optimal weights into a suitable function space. For example, choosing \mathbf{L} as a discrete image gradient ensures that the weights lie in the Sobolev space H^1(\Omega) [20]. Most importantly for our application at hand, derivative-based regularization yields smooth classification weights that, as we see next, can be interpreted by visual inspection.
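A minimal sketch of such a derivative-based penalty with a one-dimensional forward-difference operator as the discretized differential operator (a simple choice; names and sizes are our own):

```python
import numpy as np

def grad_1d(n):
    """Forward-difference operator on n grid points (one choice of L)."""
    L = np.zeros((n - 1, n))
    L[np.arange(n - 1), np.arange(n - 1)] = -1.0
    L[np.arange(n - 1), np.arange(1, n)] = 1.0
    return L

def smoothness(w, L, beta=1.0):
    """Derivative-based penalty beta/2 * ||L w||^2."""
    return 0.5 * beta * np.sum((L @ w)**2)

L = grad_1d(5)
smooth_w = np.ones(5)                        # constant weights: no penalty
rough_w = np.array([1., -1., 1., -1., 1.])   # oscillatory: large penalty
```

For 2D images the same idea applies with a discrete image gradient built, e.g., as a Kronecker product of 1D operators.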

5.3 Multi-level Learning

As another means of regularizing the problem, we exploit a multi-level learning strategy that gradually increases the number of layers in the network. The approach is based on the continuous interpretation of the forward propagation, in which the number of layers corresponds to the number of discretization points in time. It is closely related to cascadic multigrid methods [8] and to multi-level strategies commonly used in image processing to decrease the risk of being trapped in local minima; see, e.g., [39]. More details about multi-level methods in learning can be found in [25].

The basic idea is to first solve the learning problem using a network with only a few layers and then prolongate the estimated weights of the forward propagation to a network with twice as many layers. The prolongated weights are then used to initialize the optimization problem on the finer level. We repeat this process until a user-specified maximum number of layers is reached.
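A sketch of one possible prolongation by piecewise-linear interpolation in time, doubling the number of layers (the endpoint handling and names are our own assumptions, not necessarily the paper's exact scheme):

```python
import numpy as np

def prolongate(Ks):
    """Double the number of layers: keep each coarse weight and insert the
    midpoint between consecutive weights (piecewise-linear in time)."""
    fine = []
    for K0, K1 in zip(Ks[:-1], Ks[1:]):
        fine.append(K0)
        fine.append(0.5 * (K0 + K1))  # new layer halfway between
    fine.append(Ks[-1])
    fine.append(Ks[-1])               # repeat the last weight to reach 2N
    return fine

coarse = [np.full((2, 2), float(j)) for j in range(4)]  # 4 coarse layers
fine = prolongate(coarse)                               # 8 fine layers
```

The fine-level weights then serve as the initial guess for optimization on the deeper network.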

Besides the obvious computational savings arising from the reduced size of the coarse networks, the main motivation behind our approach is to obtain good starting guesses for the next level. This is key since, while deeper architectures offer more flexibility to model complicated data-label relations, they are, in our experience, more difficult to initialize. Additionally, the Gauss-Newton method used to estimate the parameters of the forward propagation generally benefits from good starting guesses.

6 Numerical Examples

In this section we present numerical examples for classification problems of varying level of difficulty. We begin with three examples aiming at learning classification functions in two variables, which allow us to easily assess and illustrate the performance of the DNNs. In Section 6.4 we show results for the MNIST data set [36, 37], which is a common benchmark problem in image classification.

6.1 Concentric Ellipses

As a first test we consider a small-scale problem in two dimensions. The data consists of 1,200 points that are evenly divided into two groups forming concentric ellipses; see the left subplot of Figure 4. The data is randomly divided into 1,000 training examples and 200 validation examples.
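A sketch of how such a data set could be generated (the ellipse radii and the sampling scheme are our own assumptions):

```python
import numpy as np

def concentric_ellipses(n_per_class=600, seed=0):
    """Two classes of 2D points on concentric ellipses (radii assumed)."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(0, 2 * np.pi, n_per_class)
    inner = np.stack([1.0 * np.cos(t), 0.5 * np.sin(t)], axis=1)
    t = rng.uniform(0, 2 * np.pi, n_per_class)
    outer = np.stack([2.0 * np.cos(t), 1.0 * np.sin(t)], axis=1)
    X = np.vstack([inner, outer])
    y = np.repeat([0, 1], n_per_class)
    return X, y

X, y = concentric_ellipses()
# The two classes are not linearly separable in the original coordinates.
```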

We train the original ResNet, the antisymmetric ResNet, and the Hamiltonian network with Verlet propagation using the block coordinate descent method and a multi-level strategy that doubles the number of layers at each level. Each block coordinate descent iteration consists of a classification step (two iterations of Newton-PCG) and a Gauss-Newton-PCG step to update the propagation weights (20 PCG iterations preconditioned by the regularization operator). In all examples we use the logistic regression hypothesis function in Eq. 2.2 and a fixed activation function. The final time and the width of the network are fixed for this experiment. To enforce smoothness of the propagation weights in time we employ the regularizer in Eq. 5.23 with a manually chosen weight. No regularization is used in the classification.

We show the performance of the multi-level scheme in the left subplot of Figure 3. For this simple data set, all forward propagation methods achieve the optimal validation accuracy at some level. As expected, the validation accuracy increases with the depth of the network, which also increases the number of degrees of freedom. The results for the Verlet method at the final level are shown in Figure 4. The two steps of deep learning (propagation and classification) can be seen in the center plot: the propagation transforms the features such that they can be linearly separated. The result of the learning process is a network that predicts the class of every point in the 2D plane, as illustrated in the right subplot of Figure 4.

Figure 3: Multi-level convergence for the two-dimensional examples in Sections 6.1–6.3. For each level (each level doubles the number of layers of the DNN) we show the best validation accuracy; the optimal value corresponds to a zero validation error.
Figure 4: Classification results for the ellipse problem from Section 6.1 using a Hamiltonian neural network with Verlet propagation at the finest level. Left: Labeled input features representing two concentric ellipses that are not linearly separable. Center: The output features are linearly separable and can be accurately classified. Right: The predictions of the network (colors in the background) superimposed on the validation data. (This figure is optimized for screen use.)

6.2 Swiss Roll

We consider another small-scale test problem inspired by the swiss roll example. The data is obtained by sampling two vector-valued curves at 513 points each. Every other point along each curve is removed from the data set and used for validation; see the left subplot of Figure 5.
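Since the exact curve definitions are not reproduced here, the following is an assumed stand-in of the same flavor, including the every-other-point validation split:

```python
import numpy as np

def swiss_roll_2d(n=513):
    """Two interleaved 2D spirals (an assumed stand-in for the paper's
    curves), split by taking every other point for validation."""
    t = np.linspace(np.pi / 4, 4 * np.pi, n)
    c0 = np.stack([t * np.cos(t), t * np.sin(t)], axis=1)
    c1 = -c0                           # second class: point-reflected copy
    X = np.vstack([c0, c1])
    y = np.repeat([0, 1], n)
    val = np.arange(2 * n) % 2 == 1    # every other point held out
    return X[~val], y[~val], X[val], y[val]

Xtr, ytr, Xva, yva = swiss_roll_2d()
```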

Given the data, we use the same parameterization of the multi-level and block coordinate descent methods as in the previous example. For all networks except the Hamiltonian network with Verlet forward propagation, we increase the dimensionality of the input feature space. As before, we use the same activation function, the logistic regression function in Eq. 2.2, and a fixed final time. To enforce smoothness of the propagation weights in time we employ the regularizer in Eq. 5.23 with a manually chosen weight. No regularization is used in the classification.

We plot the validation accuracy for each network at the different steps of the multi-level strategy in the center subplot of Figure 3. The standard ResNet and the Hamiltonian network with Verlet forward propagation both achieve the optimal validation accuracy. However, convergence is considerably faster for the Hamiltonian network, which reaches the optimal accuracy with fewer layers. We visualize the results obtained using the Verlet method in Figure 5.

Figure 5: Classification results for the swiss roll example described in Section 6.2 using a Hamiltonian network with Verlet forward propagation. Left: The training data (red and blue dots). Center: The Verlet method propagates the features so that they can be separated by the affine logistic regression classifier. Right: The interpolation obtained by our network (colored background) and the validation data. (This figure is optimized for screen use.)

6.3 Peaks

We propose a new, challenging test problem for classification into multiple classes using the peaks function in MATLAB, which reads

f(x, y) = 3 (1 - x)^2 e^{-x^2 - (y+1)^2} - 10 \left( \frac{x}{5} - x^3 - y^5 \right) e^{-x^2 - y^2} - \frac{1}{3} e^{-(x+1)^2 - y^2},

where (x, y) \in [-3, 3]^2. The peaks function is smooth but has some nonlinearities and, most importantly, non-convex level sets. We discretize the function on a regular grid and divide the points into 5 different classes based on their function value. The points in each class are then randomly subsampled such that the training data approximately represents the volumes of the level sets. In our case the sample points are divided evenly into five classes of size 1,000. We illustrate the test data in the left subplot of Figure 6.
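The peaks-based labels can be generated along the following lines (the grid size and the quantile-based binning into five classes are our own assumptions; the paper bins by function value and then resamples):

```python
import numpy as np

def peaks(x, y):
    """MATLAB's peaks function."""
    return (3 * (1 - x)**2 * np.exp(-x**2 - (y + 1)**2)
            - 10 * (x / 5 - x**3 - y**5) * np.exp(-x**2 - y**2)
            - np.exp(-(x + 1)**2 - y**2) / 3)

# Discretize on a regular grid and bin the values into 5 classes.
g = np.linspace(-3, 3, 64)
X, Y = np.meshgrid(g, g)
Z = peaks(X, Y)
labels = np.digitize(Z, np.quantile(Z, [0.2, 0.4, 0.6, 0.8]))
```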

We randomly choose 20% of the data for validation and train the networks using the remaining examples. We use the same activation function, the softmax hypothesis function in Eq. 2.3, and a fixed final time. To enforce smoothness of the propagation weights in time we employ the regularizer in Eq. 5.23 with a manually chosen weight. No regularization is used in the classification. We use the same multi-level strategy and identical parameters in the block coordinate descent method as in the previous examples.

We first train the standard ResNet and the antisymmetric variant, where we increase the width of the network by duplicating the features. The performance of both models is approximately the same; see the right subplot of Figure 3. The optimal accuracy, achieved at the finest levels, is around 98.8%.

For the Hamiltonian network with Verlet forward propagation we use a narrower network containing only the original features (i.e., no duplication). As in the previous experiments, the accuracy generally improves with increasing depth (with one exception between levels 4 and 5), and the best accuracy is obtained at the final level. We illustrate the results for the Verlet network in Figure 6. The center subplot shows how the forward propagation successfully rearranges the features such that they can be labeled using a linear classifier. The right subplot shows that the prediction function fits the training data and also approximates the true level sets.

Figure 6: Classification results for the peaks example described in Section 6.3 using a Hamiltonian network with Verlet forward propagation at the finest level. Left: The training data as colored dots representing the classes. Center: The propagated features and the predictions of the softmax classifier. Right: The predictions of our network (colored background) and the training data. (This figure is optimized for screen use.)

6.4 Mnist

We use the MNIST dataset of hand-written digits [36, 37] to illustrate the applicability of our methods to image classification problems. The data set consists of 60,000 labeled digital images of size 28 × 28 showing hand-written digits from 0 to 9. We randomly divide the data into 50,000 training and 10,000 validation examples. We train the standard and antisymmetric ResNet and the first-order Hamiltonian network using Verlet integration.

We use a three-level multi-level strategy where the number of layers is 4, 8, and 16 (cf. Table 1). At each step we use the block coordinate descent method to approximately solve the learning problem and prolongate the forward propagation parameters to the next level using piecewise linear interpolation. The width of the network is 6 (yielding 6 · 28² = 4,704 features used in the classification) and we use convolution operators that are fully connected within a given layer to model the linear transformation matrices. The final time is fixed. To compute the Gauss-Newton step we first compute the full gradient over all 50,000 examples and then randomly subsample 5,000 terms for the Hessian computations in the PCG step. The maximum number of iterations is set to 20 at each level.

Within each step of the block coordinate descent we solve the classification problem using at most 5 iterations of Newton-PCG with up to 10 inner iterations. We use a discretized Laplacian as regularization operator in Eq. 5.25 and its shifted symmetric product as a preconditioner to favor smooth modes; see [15]. The regularization parameter used in the classification is chosen manually. The output values of all examples are used at the final layer and no pooling is performed.

Smooth time dynamics are enforced by penalizing the time derivatives of the weights as outlined in Section 5.1 with a manually calibrated regularization parameter. The regularization operator is also used to precondition the PCG iterations in the Gauss-Newton method.

To show the performance of the multi-level strategy, we summarize the training and validation errors at the respective levels in Table 1. The table shows similar performance for all forward propagation methods. Note that both the validation and training errors are reduced for larger numbers of layers, and no overfitting is observed. In this experiment, the multi-level approach considerably simplified the initialization of the deep networks. For example, the initial parameters of the standard ResNet at the final level (16 layers) already gave a validation accuracy of 98.41%, which training improved to 98.47%. We illustrate the results of the antisymmetric ResNet, which yields slightly superior performance in terms of validation error, in Figure 7. The smoothness of the classification weights enforced by our regularizer can be seen in the right column.

ResNet antisymmetric ResNet Hamiltonian Verlet
layers TE VE TE VE TE VE
4 0.96% 1.71% 1.13% 1.70% 1.49% 2.29%
8 0.80% 1.59% 0.92% 1.46% 0.82% 1.60%
16 0.73% 1.53% 0.91% 1.38% 0.35% 1.58%
Table 1: Multi-level training result for MNIST data set (see Section 6.4). We show the Training Error (TE) and Validation Error (VE) for the standard and antisymmetric ResNet and the Hamiltonian inspired network with Verlet forward propagation for increasing number of layers in the network. The different forward propagation methods yield comparable results with the antisymmetric ResNet giving slightly lower validation errors at each level.
Figure 7: Classification results for the MNIST data set (see Section 6.4) using an antisymmetric ResNet with 16 layers and a width of 6. We show two randomly chosen input images, their propagated versions at the output layer, and the classification weights for the two digits shown. The smoothness enforced by the second-order regularization operator is evident in the weights.

7 Summary and conclusions

In this paper, we expose the relation between deep learning and dynamic inverse problems and lay the foundation for fruitful research at the interface of inverse problems and data science. Specifically, we propose new architectures for deep neural networks (DNN) that improve the stability of the forward propagation. Using derivative-based regularization, we also improve the well-posedness of the learning task. We also propose a multi-level strategy that simplifies the choice of initial parameters and thereby simplifies the training of very deep networks. Our forward propagation methods are inspired by Hamiltonian systems and motivated by a stability analysis that is based on a continuous formulation of the learning problem.

Our stability discussion in Section 3 provides new insights into why ResNets [27], a commonly used forward propagation scheme in deep learning, can be unstable. Our result is based on a continuous interpretation of a simplified version of ResNets as a system of nonlinear ODEs and a standard stability argument. Using intuitive examples, we show the impact of the spectral properties of the transformation matrices on the stability of the forward propagation and state conditions under which gradients explode, vanish, or remain stable. We also note that stability depends on the smoothness of the network parameters over time. Based on our findings and numerical experience, we argue that it is desirable to restrict ResNets to transformation matrices that lead to Jacobians whose spectrum has real parts close to zero and to penalize the time derivative of the network parameters through regularization. We also emphasize that the well-posedness of the learning problem in ResNets relies on sufficiently small time steps. Extending our analysis to more general versions of ResNet [27] is straightforward and will be investigated in future work.

Motivated by our theoretical considerations, we propose three new forward propagation methods in Section 4 that yield well-posed learning problems for arbitrarily deep networks. The first one is a simple modification of ResNet that applies a linear transformation to the transformation matrices to annihilate the real parts of their eigenvalues. The other two methods are inspired by Hamiltonian systems and provide alternative ways to preserve information in the forward propagation. The Hamiltonian networks require special integration techniques; see [2] for details on their treatment. The second-order forward propagation can be discretized using the leapfrog method, and our implementation of the first-order propagation uses the Verlet method. While the stability of the leapfrog method requires matrices with non-positive real eigenvalues, the Verlet method does not impose such restrictions and yields the best performance in our numerical experiments.
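The eigenvalue annihilation can be checked numerically: the antisymmetric part of a matrix has a purely imaginary spectrum (the projection shown here is one natural such transformation; the matrix size and names are our own):

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((6, 6))
A = 0.5 * (K - K.T)   # antisymmetric part: removes real spectral parts

# Eigenvalues of a real antisymmetric matrix are purely imaginary, so the
# real part of the spectrum of the linearized propagation is (numerically)
# zero, which is the regime our stability analysis favors.
eigs = np.linalg.eigvals(A)
max_real = np.max(np.abs(eigs.real))
```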

Our approach to stabilizing the learning problem for very deep architectures differs significantly from existing approaches, most importantly batch normalization [31]. Batch normalization scales the values at hidden layers to prevent vanishing or exploding gradients and has been shown to improve the efficiency of training deep networks. In contrast, our approach does not modify the values of the features but alleviates the need for normalization by constructing stable forward propagation methods.

To improve the generalization quality, improve stability, and simplify the training of deep networks, we also propose new regularization approaches that build on our continuous formulation of the problem. We use derivative-based regularizers to favor smooth time dynamics and, for image classification problems, spatially smooth classification weights. To further regularize and simplify the initialization of the learning algorithm, we employ a multi-level learning strategy that gradually increases the depth of the network. In our experiments, this approach has been a simple yet effective way to obtain good initializations for the learning problems. Our regularization methods are commonly used in imaging science, e.g., image registration [39], but, to the best of our knowledge, are not commonly employed in deep learning. Our numerical examples show that approximately solving the regularized learning problem yields networks that generalize well even when the number of network parameters exceeds the number of training features.

We illustrate our methods using three academic test problems with available ground-truth and the MNIST problem [37], which is a commonly used benchmark problem for image classification. Our experiments show that the proposed new architectures yield results that are competitive with the established ResNet architecture. This is particularly noteworthy for the proposed anti-symmetric ResNet, where we restrict the dimensionality of the search space to ensure stability.

By establishing a link between deep learning and dynamic inverse problems, we are confident that this work will stimulate further research in both communities. An important item of future work is investigating the impact of the proposed architectures on the performance of learning algorithms. Currently, stochastic gradient descent [43] is commonly used to train deep neural networks due to its computational efficiency and empirical evidence supporting its generalization properties [10]. A specific question is whether the improvements in the forward propagation and regularization proposed in this paper lead to better generalization properties of subsampled second-order methods such as [13, 44]. Another thrust of future work is the development of automatic parameter selection strategies for deep learning based on the approaches presented, e.g., in [19, 26, 48, 24, 18]. A particular challenge in this application is the nontrivial relationship between the regularization parameters chosen for the classification and the forward propagation.


This work is supported in part by National Science Foundation award DMS 1522599 and by the NVIDIA Corporation through their donation of a Titan X GPU.


  • [1] Y. S. Abu-Mostafa, M. Magdon-Ismail, and H.-T. Lin. Learning from data, volume 4. AMLBook New York, NY, USA:, 2012.
  • [2] U. Ascher. Numerical methods for Evolutionary Differential Equations. SIAM, Philadelphia, 2010.
  • [3] U. Ascher, R. Mattheij, and R. Russell. Numerical Solution of Boundary Value Problems for Ordinary Differential Equations. SIAM, Philadelphia, 1995.
  • [4] U. Ascher and L. Petzold. Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. SIAM, Philadelphia, PA, 1998.
  • [5] Y. Bengio. Learning deep architectures for AI. Foundations and trends® in Machine Learning, 2(1):1–127, 2009.
  • [6] Y. Bengio, P. Simard, and P. Frasconi. Learning Long-Term Dependencies with Gradient Descent Is Difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
  • [7] G. A. Bliss. The use of adjoint systems in the problem of differential corrections for trajectories. JUS Artillery, 51:296–311, 1919.
  • [8] F. A. Bornemann and P. Deuflhard. The cascadic multigrid method for elliptic problems. Numerische Mathematik, 75(2):135–152, 1996.
  • [9] A. Borzì and V. Schulz. Computational Optimization of Systems Governed by Partial Differential Equations, volume 8. SIAM, Philadelphia, PA, 2012.
  • [10] L. Bottou. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade. Springer, Berlin, Heidelberg, 2012.
  • [11] L. Bottou, F. E. Curtis, and J. Nocedal. Optimization methods for large-scale machine learning. arXiv preprint arXiv:1606.04838, 2016.
  • [12] M. Burger, G. Gilboa, S. Osher, and J. Xu. Nonlinear inverse scale space methods. Commun. Math. Sci., 4(1):179–212, 03 2006.
  • [13] R. H. Byrd, G. M. Chin, W. Neveitt, and J. Nocedal. On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning. 21(3):977–995, 2011.
  • [14] R. H. Byrd, G. M. Chin, J. Nocedal, and Y. Wu. Sample size selection in optimization methods for machine learning. Mathematical programming, 134(1):127–155, 2012.
  • [15] D. Calvetti and E. Somersalo. Priorconditioners for linear systems. Inverse Problems, 2005.
  • [16] Y. Cao, S. Li, L. Petzold, and R. Serban. Adjoint Sensitivity Analysis for Differential-Algebraic Equations: The Adjoint DAE System and its Numerical Solution. SIAM J. Sci. Comput., 24(3):1076–1089, 2003.
  • [17] T. F. Chan and J. Shen. Image Processing and Analysis. SIAM, Philadelphia, 2010.
  • [18] E. de Sturler and M. E. Kilmer. A Regularized Gauss–Newton Trust Region Approach to Imaging in Diffuse Optical Tomography. SIAM J. Sci. Comput., 33(5):3057–3086, 2011.
  • [19] H. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Kluwer, 1996.
  • [20] L. C. Evans. Partial Differential Equations. American Mathematical Society, San Francisco, 1998.
  • [21] J. Friedman, T. Hastie, and R. Tibshirani. The elements of statistical learning, volume 1. Springer series in statistics Springer, Berlin, 2001.
  • [22] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
  • [23] I. Goodfellow, H. Lee, Q. V. Le, A. Saxe, and A. Y. Ng. Measuring invariances in deep networks. In Advances in neural information processing systems, pages 646–654, 2009.
  • [24] E. Haber and D. Oldenburg. A GCV-based method for nonlinear inverse problems. Computational Geosciences, 4(1):41–63, 2000.
  • [25] E. Haber, L. Ruthotto, and E. Holtham. Learning across scales-a multiscale method for convolution neural networks. arXiv preprint arXiv:1703.02009, 2017.
  • [26] P. C. Hansen. Rank-Deficient and Discrete Ill-Posed Problems. SIAM, Philadelphia, 1997.
  • [27] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
  • [28] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.
  • [29] M. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand., 49:409–436, 1952.
  • [30] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
  • [31] S. Ioffe and C. Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift., 2015.
  • [32] R. E. Kalman et al. Contributions to the theory of optimal control. Bol. Soc. Mat. Mexicana, 5(2):102–119, 1960.
  • [33] N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima., 2016.
  • [34] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • [35] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
  • [36] Y. LeCun, B. E. Boser, and J. S. Denker. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems, 1990.
  • [37] Y. LeCun and C. Cortes. The MNIST database of handwritten digits.
  • [38] J. Martens and I. Sutskever. Training Deep and Recurrent Networks with Hessian-Free Optimization. In Neural Networks: Tricks of the Trade, pages 479–535. Springer, Berlin, Heidelberg, 2012.
  • [39] J. Modersitzki. FAIR: Flexible Algorithms for Image Registration. SIAM, Philadelphia, 2009.
  • [40] M. F. Møller. A scaled conjugate gradient algorithm for fast supervised learning. Neural networks, 6(4):525–533, 1993.
  • [41] A. Y. Ng. Feature selection, l 1 vs. l 2 regularization, and rotational invariance. In Proceedings of the twenty-first international conference on Machine learning, page 78. ACM, 2004.
  • [42] J. Nocedal and S. Wright. Numerical Optimization. Springer, New York, 1999.
  • [43] H. Robbins and S. Monro. A Stochastic Approximation Method. The annals of mathematical statistics, 1951.
  • [44] F. Roosta-Khorasani and M. W. Mahoney. Sub-Sampled Newton Methods II: Local Convergence Rates. ArXiv e-prints, Jan. 2016.
  • [45] K. Rothauge, E. Haber, and U. Ascher. Numerical computation of the gradient and the action of the hessian for time-dependent pde-constrained optimization problems. arXiv preprint arXiv:1509.03801, 2015.
  • [46] D. Rumelhart, G. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–538, 1986.
  • [47] Y. Saad. Iterative Methods for Sparse Linear Systems. Second Edition. SIAM, Philadelphia, 2003.
  • [48] C. R. Vogel. Computational methods for inverse problems. SIAM, Philadelphia, 2001.