Over the last three decades, algorithms inspired by partial differential equations (PDEs) have had a profound impact on many processing tasks that involve speech, image, and video data. Adapting PDE models that were traditionally used in physics to perform image processing tasks has led to ground-breaking contributions. An incomplete list of seminal works includes optical flow models for motion estimation, nonlinear diffusion models for filtering of images, variational methods for image segmentation [36, 1, 8], and nonlinear edge-preserving denoising.
A standard step in PDE-based data processing is interpreting the involved data as discretizations of multivariate functions. Consequently, many operations on the data can be modeled as discretizations of PDE operators acting on the underlying functions. This continuous data model has led to solid mathematical theories for classical data processing tasks obtained by leveraging the rich results from PDEs and variational calculus (e.g., ). The continuous perspective has also enabled more abstract formulations that are independent of the actual resolution, which has been exploited to obtain efficient multiscale and multilevel algorithms (e.g., ).
In this paper, we establish a new PDE-interpretation of deep learning tasks that involve speech, image, and video data as features. Deep learning is a form of machine learning that uses neural networks with many hidden layers [4, 31]. Although neural networks date back at least to the 1950s, their popularity soared a few years ago when deep neural networks (DNNs) outperformed other machine learning methods in speech recognition and image classification. Deep learning has also led to dramatic improvements in computer vision, e.g., surpassing human performance in image recognition [25, 29, 31]. These results ignited the recent flare of research in the field. To obtain a PDE-interpretation, we use a continuous representation of the images and extend recent works [19, 15], which relate deep learning problems for general data types to ordinary differential equations (ODEs).
Deep neural networks filter input features using several layers whose operations consist of element-wise nonlinearities and affine transformations. The main idea of convolutional neural networks (CNNs) is to base the affine transformations on convolution operators with compactly supported filters. Supervised learning aims at learning the filters and other parameters, which are also called weights, from training data. CNNs are widely used for solving large-scale learning tasks involving data that represent a discretization of a continuous function, e.g., voice, images, and videos [29, 30, 32]. By design, each CNN layer exploits the local relation between image information, which simplifies computation.
Despite their enormous success, deep CNNs still face critical challenges, including designing a CNN architecture that is effective for a practical learning task, which requires many choices. In addition to the number of layers, also called the depth of the network, important aspects are the number of convolution filters at each layer, also called the width of the layers, and the connections between those filters. A recent trend is to favor deep over wide networks, aiming at improving generalization (i.e., the performance of the CNN on new examples that were not used during the training). Another key challenge is designing the layer, i.e., choosing the combination of affine transformations and nonlinearities. A practical but costly approach is to consider depth, width, and other properties of the architecture as hyper-parameters and jointly infer them with the network weights. Our interpretation of CNN architectures as discretized PDEs provides new mathematical theories to guide the design process. In short, we obtain architectures by discretizing the underlying PDE through adequate time integration methods.
In addition to substantial training costs, deep CNNs face fundamental challenges when it comes to their interpretability and robustness. In particular, CNNs that are used in mission-critical tasks (such as driverless cars) face the challenge of being "explainable." Casting the learning task within nonlinear PDE theory allows us to understand the properties of such networks better. We believe that further research into the mathematical structures presented here will result in a more solid understanding of the networks and will close the gap between deep learning and more mature fields that rely on nonlinear PDEs, such as fluid dynamics. A direct impact of our approach can be observed when studying, e.g., adversarial examples. Recent works indicate that the predictions obtained by deep networks can be very sensitive to perturbations of the input images. These findings motivate us to favor networks that are stable, i.e., networks whose outputs are robust to small perturbations of the input features, similar to what PDE analysis suggests.
In this paper, we consider residual neural networks (ResNets), a very effective type of neural network. We show that residual CNNs can be interpreted as a discretization of a space-time differential equation. We use this link for analyzing the stability of a network and for motivating new network models that bear similarities with well-known PDEs. Using our framework, we present three new architectures. First, we introduce parabolic CNNs that restrict the forward propagation to dynamics that smooth image features and bear similarities with anisotropic filtering [38, 45, 12]. Second, we propose hyperbolic CNNs that are inspired by Hamiltonian systems and, finally, a third architecture based on second-order hyperbolic dynamics. As is to be expected, these networks have different properties. For example, hyperbolic CNNs approximately preserve the energy in the system, which sets them apart from parabolic networks that smooth the image data, reducing the energy. Computationally, the structure of a hyperbolic forward propagation can be exploited to alleviate the memory burden because hyperbolic dynamics can be made reversible on the continuous and discrete levels. The methods suggested here are closely related to reversible ResNets [16, 9].
The remainder of this paper is organized as follows. In Section 2, we provide a brief introduction to residual networks and their relation to ordinary and, in the case of convolutional neural networks, partial differential equations. In Section 3, we present three novel CNN architectures motivated by PDE theory. Based on our continuous interpretation, we present regularization functionals that enforce the smoothness of the dynamical systems in Section 4. In Section 5, we present numerical results for image classification that indicate the competitiveness of our PDE-based architectures. Finally, we highlight some directions for future research in Section 6.
2 Residual Networks and Differential Equations
The abstract goal of machine learning is to find a function that accurately predicts the result of an observed phenomenon (e.g., the class of an image, a spoken word, etc.). The function is parameterized by a weight vector that is trained using examples. In supervised learning, a set of input features and output labels is available and used to train the model. The output labels are vectors whose components correspond to the estimated probability of a particular example belonging to a given class. As an example, consider the image classification results in Fig. 1, where the predicted and actual labels are visualized using bar plots. For brevity, we collect the training features and labels into matrices.
In deep learning, the function consists of a concatenation of nonlinear functions called hidden layers. Each layer is composed of affine linear transformations and pointwise nonlinearities and aims at filtering the input features in a way that enables learning. As a fairly general formulation, we consider an extended version of a commonly used layer, which filters the features as follows
Here, the parameter vector is partitioned into three parts: the first two parameterize the linear operators, respectively, and the third contains the parameters of the normalization layer. The activation function is applied component-wise; common examples are the hyperbolic tangent and the rectified linear unit (ReLU). A deep neural network can be written by concatenating many of the layers given in (1).
When dealing with image data, it is common to group the features into different channels (e.g., for RGB image data there are three channels) and define the linear operators in (1) as block matrices consisting of spatial convolutions. Typically, each channel of the output image is computed as a weighted sum of each of the convolved input channels. To give an example, assume that the operator has three input and two output channels and that each block is a standard convolution operator. In this case, the operator can be written as a 2-by-3 block matrix whose blocks are convolution operators with learned stencil parameters.
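The block structure described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: each block is a one-dimensional three-point convolution, and every output channel is the sum of the convolved input channels.

```python
import numpy as np

def conv1d(x, w):
    # "same"-size 1-D convolution of a single channel with a 3-point stencil
    return np.convolve(x, w, mode="same")

def block_conv(X, W):
    # X: (c_in, n) input channels; W: (c_out, c_in, 3) stencils.
    # Each output channel is a sum of convolved input channels,
    # mirroring the block-matrix structure of the operator.
    c_out, c_in, _ = W.shape
    Y = np.zeros((c_out, X.shape[1]))
    for i in range(c_out):
        for j in range(c_in):
            Y[i] += conv1d(X[j], W[i, j])
    return Y

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 16))   # three input channels
W = rng.standard_normal((2, 3, 3)) # two output channels, 3-point stencils
Y = block_conv(X, W)
```

Because every block is linear, the whole operator is linear in the input features, which is what allows it to be treated as a single (block) matrix.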
The normalization layer computes the empirical mean and standard deviation of each channel in the input images across the spatial dimensions and examples and uses this information to normalize the statistics of the output images. While the coupling of different examples is counter-intuitive, its use is wide-spread and motivated by empirical evidence showing a faster convergence of training algorithms. The associated weights represent scaling factors and biases (i.e., constant shifts applied to all pixels in the channel) for each output channel that are applied after the normalization.
ResNets have recently improved the state-of-the-art in several benchmarks, including computer vision contests on image classification [25, 29, 31]. Given the input features, a ResNet unit with several layers produces a filtered version as follows
where the weights (convolution stencils and biases) of each layer are learned. To emphasize the dependency of this process on the weights, we make it explicit in our notation.
Note that the dimension of the feature vectors (i.e., the image resolution and the number of channels) is the same across all layers of a ResNet unit, which is limiting in many practical applications. Therefore, implementations of deep CNNs contain a concatenation of ResNet units with other layers that can change, e.g., the number of channels and the image resolution (see, e.g., [22, 9]).
In image recognition, the goal is to classify the output of (3) using, e.g., a linear classifier modeled by a fully-connected layer, i.e., an affine transformation with a dense matrix. To avoid confusion with the ResNet units, we denote these transformations separately; the bias is distributed, i.e., the same shift is applied to every example. The parameters of the network and the classifier are unknown and have to be learned. Thus, the goal of learning is to estimate the network parameters and the weights of the classifier by approximately solving the optimization problem
Here, the first term measures the data fit through a loss function, which is convex in its first argument, and the second term is a convex regularizer discussed below. Typical examples of loss functions are the least-squares function in regression and logistic regression or cross-entropy functions in classification.
The optimization problem in (4) is challenging for several reasons. First, it is a high-dimensional and non-convex optimization problem. Therefore, one has to be content with local minima. Second, the computational cost per example is high, and the number of examples is large. Third, very deep architectures are prone to problems such as vanishing and exploding gradients that may occur when the discrete forward propagation is unstable.
2.1 Residual Networks and ODEs
We derived a continuous interpretation of the filtering provided by ResNets in earlier work; similar observations were made in [15, 10]. The ResNet in (3) can be seen as a forward Euler discretization (with a fixed step size) of an initial value problem.
Here, we introduce an artificial time. The depth of the network is related to the arbitrary final time and the magnitude of the matrices in (1). This observation shows the relation between the learning problem (4) and parameter estimation of a system of nonlinear ordinary differential equations. Note that this interpretation does not assume any particular structure of the layer.
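The correspondence between a ResNet unit and the forward Euler method can be illustrated with a small NumPy sketch. The single-operator layer used here (tanh of an affine transformation) is a simplification of the general layer (1); all names and shapes are illustrative assumptions.

```python
import numpy as np

def layer(Y, K, b):
    # simplified layer f(Y) = tanh(K Y + b); a stand-in for (1)
    return np.tanh(K @ Y + b)

def resnet_forward(Y0, Ks, bs, h):
    # forward Euler discretization of dY/dt = f(Y, theta(t)):
    # Y_{j+1} = Y_j + h * f(Y_j, theta_j), cf. the ResNet unit (3)
    Y = Y0.copy()
    for K, b in zip(Ks, bs):
        Y = Y + h * layer(Y, K, b)
    return Y

rng = np.random.default_rng(0)
Ks = [rng.standard_normal((4, 4)) for _ in range(8)]  # per-step weights
bs = [rng.standard_normal((4, 1)) for _ in range(8)]
Y0 = rng.standard_normal((4, 1))
Y_out = resnet_forward(Y0, Ks, bs, 0.1)
```

With step size zero the unit is the identity map, which makes the residual structure explicit: each layer only adds a scaled update to the current state.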
The continuous interpretation of ResNets can be exploited in several ways. One idea is to accelerate training by solving a hierarchy of optimization problems that gradually introduce new time discretization points for the weights. Also, new numerical solvers based on optimal control theory have been proposed. Another recent line of work uses more sophisticated time integrators to solve the forward propagation and the adjoint problem (in this context commonly called back-propagation), which is needed to compute derivatives of the objective function with respect to the network weights.
2.2 Convolutional ResNets and PDEs
In the following, we consider learning tasks involving features given by speech, image, or video data. For these problems, the input features can be seen as a discretization of a continuous function. We assume that the matrices in (1) represent convolution operators. Their sizes determine the width of the layer, i.e., the number of input, intermediate, and output features of this layer.
We now show that a particular class of deep residual CNNs can be interpreted as nonlinear systems of PDEs. For ease of notation, we first consider a one-dimensional convolution of a feature with one channel and then outline how the result extends to higher space dimensions and multiple channels.
Assume that the feature vector represents a one-dimensional grid function obtained by discretizing a continuous function at the cell-centers of a regular grid with a fixed number of cells and mesh size.
Assume, e.g., that the operator in (1) is parameterized by a three-point stencil. Applying a coordinate change, we see that the convolution can be written as a weighted sum of an averaging operator and finite difference operators.
Here, the weights of this decomposition are related to the stencil entries by a linear system, which is non-singular for any positive mesh size, and we denote its unique solution accordingly. Upon taking the limit of a vanishing mesh size, this observation motivates one to parameterize the convolution operator as a linear combination of the identity, the first derivative, and the second derivative.
The individual terms in the transformation correspond to reaction, convection, and diffusion, respectively, and the bias term in (1) is a source/sink term. Note that higher-order derivatives can be generated by multiplying different convolution operators or increasing the stencil size.
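The decomposition can be made concrete by solving a small linear system that maps a learned three-point stencil to coefficients of an averaging (reaction), a central-difference (convection), and a second-difference (diffusion) operator. The particular basis scaling below is one common finite-difference convention, an assumption rather than the paper's exact choice.

```python
import numpy as np

def stencil_to_pde_coeffs(w, h):
    # Express a 3-point stencil w as
    #   beta1 * (averaging)  +  beta2 * (central 1st difference)
    #   + beta3 * (2nd difference),
    # each scaled with the mesh size h. Columns of A hold the basis stencils.
    A = np.array([
        [0.25, -1.0 / (2 * h),  1.0 / h**2],
        [0.50,            0.0, -2.0 / h**2],
        [0.25,  1.0 / (2 * h),  1.0 / h**2],
    ])
    return np.linalg.solve(A, w)   # non-singular for any h > 0

h = 0.1
w = np.array([0.3, -0.7, 0.5])     # an arbitrary "learned" stencil
beta = stencil_to_pde_coeffs(w, h)
```

A pure discrete Laplacian stencil yields a nonzero diffusion coefficient only, while a central-difference stencil yields a pure convection coefficient, exposing how the stencil entries scale with the image resolution.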
This simple observation exposes the dependence of learned weights on the image resolution, which can be exploited in practice, e.g., by multiscale training strategies. Here, the idea is to train a sequence of networks using a coarse-to-fine hierarchy of image resolutions (often called an image pyramid). Since both the number of operations and the memory required in training are proportional to the image size, this leads to immediate savings during training but also allows one to coarsen already trained networks to enable efficient evaluation. In addition to computational benefits, ignoring fine-scale features when training on the coarse grid can also reduce the risk of being trapped in an undesirable local minimum, an observation also made in other image processing applications.
Our argument extends to higher spatial dimensions. In 2D, e.g., we can relate a 3-by-3 stencil to a linear combination of partial differential operators up to second order. To obtain a fully continuous model for the layer in (1), we proceed the same way with the second operator. In view of (2), we note that when the number of input and output channels is larger than one, the convolution operators lead to a system of coupled partial differential operators.
Given the continuous space-time interpretation of CNNs, we view the optimization problem (4) as an optimal control problem and, similarly, see learning as a parameter estimation problem for the time-dependent nonlinear PDE (5). Developing efficient numerical methods for solving PDE-constrained optimization problems arising in optimal control and parameter estimation has been a fruitful research endeavor and has led to many advances in science and engineering (for recent overviews see, e.g., [7, 24, 6]). Using the theoretical and algorithmic framework of optimal control in machine learning applications has gained traction only recently (e.g., [15, 19, 9, 33, 11]).
3 Deep Neural Networks motivated by PDEs
It is well-known that not every time-dependent PDE is stable with respect to perturbations of the initial conditions. Here, we say that the forward propagation in (5) is stable if there is a constant, independent of the final time, that bounds the Frobenius-norm distance between any two solutions of (5) at the final time in terms of the distance between their initial values. The stability of the forward propagation depends on the values of the weights that are chosen by solving (4). In the context of learning, the stability of the network is critical to provide robustness to small perturbations of the input images. In addition to image noise, perturbations could also be added deliberately to mislead the network's prediction by an adversary. There is some recent evidence showing the existence of such perturbations that reliably mislead deep networks while being barely noticeable to a human observer (e.g., [18, 37, 35]).
To ensure the stability of the network for all possible weights, we propose to restrict the space of CNNs. As examples of this general idea, we present three new types of residual CNNs that are motivated by parabolic and first- and second-order hyperbolic PDEs, respectively. The construction of our networks guarantees that the networks are stable forward and, for the hyperbolic networks, stable backward in time.
Though it is common practice to model the two linear operators in (1) independently, we note that it is, in general, hard to show the stability of the resulting network. This is because the Jacobian of the layer with respect to the features is a product of the two operators and a diagonal matrix containing the derivatives of the pointwise nonlinearity, where for simplicity we omit the normalization layer. Even in this simplified setting, the spectral properties of the Jacobian, which impact the stability, are unknown for arbitrary choices of the two operators.
As one way to obtain a stable network, we introduce a symmetric version of the layer in (1) by choosing the second linear operator to be the negative transpose of the first. To simplify our notation, we drop the subscript of the operator and define the symmetric layer.
It is straightforward to verify that this choice leads to a negative semi-definite Jacobian for any non-decreasing activation function. As we see next, this choice also allows us to link the discrete network to different types of PDEs.
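The negative semi-definiteness of the Jacobian of the symmetric layer can be checked numerically. The sketch below uses tanh as a representative non-decreasing activation and arbitrary sizes; the normalization layer is omitted.

```python
import numpy as np

def sym_layer(y, K):
    # symmetric layer: f(y) = -K^T sigma(K y), with sigma = tanh
    return -K.T @ np.tanh(K @ y)

def jacobian(y, K):
    # J = -K^T diag(sigma'(K y)) K; negative semi-definite whenever
    # the activation is non-decreasing (sigma' >= 0)
    s = 1.0 - np.tanh(K @ y) ** 2    # derivative of tanh, in (0, 1]
    return -K.T @ np.diag(s) @ K

rng = np.random.default_rng(0)
K = rng.standard_normal((8, 5))      # arbitrary "convolution" operator
y = rng.standard_normal(5)
J = jacobian(y, K)
eigs = np.linalg.eigvalsh(J)         # J is symmetric, eigenvalues real
```

Since the diagonal factor is non-negative, J is a negated Gram-type matrix: symmetric with all eigenvalues at most zero, regardless of the entries of K.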
3.1 Parabolic CNN
Note that (8) is equivalent to the heat equation if the activation and normalization are identity maps and the convolution operator is a discrete gradient. This motivates us to refer to this network as a parabolic CNN. Nonlinear parabolic PDEs are widely used, e.g., to filter images [38, 45, 12], and our interpretation implies that the networks can be viewed as an extension of such methods.
The similarity to the heat equation motivates us to introduce a new normalization layer motivated by total variation denoising. For a single example whose features can be grouped into channels, we define the layer by dividing the features pixel-wise by the square root of the sum of their squares over all channels plus a fixed conditioning parameter; the square, square root, and division are defined component-wise. As for the batch norm layer, we implement this layer with trainable weights corresponding to global scaling factors and biases for each channel. In the case that the convolution reduces to a discrete gradient, this normalization leads to the regular dynamics in TV denoising.
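A minimal sketch of this normalization, omitting the trainable scaling and bias; the conditioning parameter `eps` and the channel-major layout are assumptions made for illustration.

```python
import numpy as np

def tv_norm(Y, eps=1e-3):
    # total-variation-style normalization: divide each channel pixel-wise
    # by sqrt(sum over channels of squares + eps); Y has shape (c, n)
    denom = np.sqrt((Y ** 2).sum(axis=0) + eps)   # per-pixel magnitude
    return Y / denom

Y = np.array([[3.0, 0.0],
              [4.0, 1.0]])   # two channels, two pixels
Z = tv_norm(Y)
```

Each pixel of the output has (approximately) unit channel-wise magnitude, which is what links the layer to the 1/|grad u| factor in TV denoising when the convolution is a discrete gradient.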
Parabolic PDEs have a well-known decay property that renders them robust to perturbations of the initial conditions. For the parabolic CNN in (8) we can show the following stability result.
If the activation function is monotonically non-decreasing, then the forward propagation through a parabolic CNN satisfies (6).
For ease of notation, we assume that no normalization layer is used in (8). We then show that the right-hand side is a monotone operator: for any two inputs, the inner product of the difference of the layer outputs with the difference of the inputs is non-positive, where the inequality follows from the monotonicity of the activation function. This shows that the distance between two trajectories is non-increasing. Integrating this inequality over time yields stability as in (6). The proof extends straightforwardly to cases when a normalization layer with scaling and bias is included.
One way to discretize the parabolic forward propagation (8) is the forward Euler method with a fixed time step size. The discrete forward propagation of a given example is stable if the product of the step size with each eigenvalue of the Jacobian lies in the stability region of the forward Euler method, and accurate if the step size is chosen small enough to capture the dynamics of the system. Here, the relevant eigenvalues are those of the Jacobian of the right-hand side of (8) with respect to the features at a given time point. If we assume, for simplicity, that no normalization layer is used, the Jacobian is
If the activation function is monotonically non-decreasing, then its derivative is non-negative everywhere. In this case, all eigenvalues of the Jacobian are real and bounded above by zero since the Jacobian is also symmetric. Thus, there is an appropriate step size that renders the discrete forward propagation stable. In our numerical experiments, we aim at ensuring the stability of the discrete forward propagation by limiting the magnitude of the convolution weights through bound constraints added to the optimization problem (4).
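The forward Euler step and a sufficient step-size bound can be sketched as follows. Since the derivative of tanh is at most one, the spectral radius of the Jacobian is bounded by the largest eigenvalue of K^T K, which yields a conservative stable step size; this bound is our illustrative derivation, not a formula quoted from the paper.

```python
import numpy as np

def parabolic_step(y, K, h):
    # one forward Euler step of the parabolic network (no normalization):
    # y_{j+1} = y_j - h * K^T tanh(K y_j)
    return y - h * K.T @ np.tanh(K @ y)

def stable_step_size(K):
    # forward Euler is stable when h * |lambda| <= 2 for all (real,
    # non-positive) eigenvalues lambda of the Jacobian; since tanh' <= 1,
    # |lambda| is bounded by the largest eigenvalue of K^T K
    lam_max = np.linalg.eigvalsh(K.T @ K).max()
    return 2.0 / lam_max

rng = np.random.default_rng(1)
K = rng.standard_normal((6, 6))
y0 = rng.standard_normal(6)
h = 0.5 * stable_step_size(K)        # safely inside the stability bound
y1 = parabolic_step(y0, K, h)
```

With a step size inside this bound the discrete dynamics inherit the decay property of the continuous parabolic flow: the norm of the features does not grow from one step to the next.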
3.2 Hyperbolic CNNs
Different types of networks can be obtained by considering hyperbolic PDEs. In this section, we present two CNN architectures that are inspired by hyperbolic systems. A favorable feature of hyperbolic equations is their reversibility. Reversibility allows us to avoid storage of intermediate network states, thus achieving higher memory efficiency. This is particularly important for very deep networks, where memory limitations can hinder training.
Introducing an auxiliary variable (i.e., by partitioning the channels of the original features), we consider the dynamics
We showed previously that the eigenvalues of the associated Jacobian are imaginary. When assuming that the operators change sufficiently slowly in time, stability as defined in (6) is obtained. A more precise stability result can be established by analyzing the kinematic eigenvalues of the forward propagation.
We discretize the dynamic using symplectic Verlet integration (see, e.g., the literature on geometric numerical integration for details) with a fixed step size. This dynamic is reversible, i.e., given the two final states, the propagation can also be computed backwards step by step. These operations are numerically stable for the Hamiltonian CNN.
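Reversibility of the Verlet scheme can be demonstrated directly: running the updates in reverse order reconstructs the initial states without storing intermediates. Using a single operator per time step and tanh as the activation is a simplification of the paper's dynamics, chosen for illustration.

```python
import numpy as np

def verlet_forward(y, z, Ks, h):
    # symplectic Verlet steps for Hamiltonian-inspired dynamics;
    # the two state halves are updated alternately
    for K in Ks:
        y = y - h * K.T @ np.tanh(K @ z)
        z = z + h * K.T @ np.tanh(K @ y)
    return y, z

def verlet_backward(y, z, Ks, h):
    # invert the updates in opposite order: no intermediate states
    # need to be stored during forward propagation
    for K in reversed(Ks):
        z = z - h * K.T @ np.tanh(K @ y)
        y = y + h * K.T @ np.tanh(K @ z)
    return y, z

rng = np.random.default_rng(2)
Ks = [rng.standard_normal((5, 5)) for _ in range(6)]
y0 = rng.standard_normal(5)
z0 = rng.standard_normal(5)
yN, zN = verlet_forward(y0, z0, Ks, 0.1)
y_rec, z_rec = verlet_backward(yN, zN, Ks, 0.1)
```

Each update depends only on the *other* half of the state, so every step can be undone exactly (up to floating-point round-off), which is what makes the scheme memory-efficient for training.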
An alternative way to obtain hyperbolic CNNs is by using second-order dynamics
The resulting forward propagation is associated with a nonlinear version of the telegraph equation, which describes the propagation of signals through networks. Hence, one could claim that second-order networks better mimic biological networks and are therefore more appropriate than first-order networks for approaches that aim at imitating the propagation through biological networks.
We discretize the second-order network using the leapfrog method with a fixed step size. We set the first two states equal to encode the initial condition. Similar to the symplectic integration in (10), this scheme is reversible.
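A sketch of the leapfrog scheme and its reversal, again with one illustrative operator per step and tanh as the activation (both are assumptions for this example, not the paper's exact configuration).

```python
import numpy as np

def leapfrog_forward(y_prev, y_curr, Ks, h):
    # leapfrog for d^2y/dt^2 = -K^T tanh(K y):
    # y_{j+1} = 2 y_j - y_{j-1} - h^2 * K^T tanh(K y_j)
    for K in Ks:
        y_next = 2 * y_curr - y_prev - h**2 * K.T @ np.tanh(K @ y_curr)
        y_prev, y_curr = y_curr, y_next
    return y_prev, y_curr

def leapfrog_backward(y_prev, y_curr, Ks, h):
    # given the last two states, run the recurrence in reverse:
    # y_{j-1} = 2 y_j - y_{j+1} - h^2 * K^T tanh(K y_j)
    for K in reversed(Ks):
        y_before = 2 * y_prev - y_curr - h**2 * K.T @ np.tanh(K @ y_prev)
        y_prev, y_curr = y_before, y_prev
    return y_prev, y_curr

rng = np.random.default_rng(3)
Ks = [rng.standard_normal((4, 4)) for _ in range(5)]
y0 = rng.standard_normal(4)
# first two states equal, encoding the initial condition
a, b = leapfrog_forward(y0.copy(), y0.copy(), Ks, 0.1)
y_prev_rec, y_curr_rec = leapfrog_backward(a, b, Ks, 0.1)
```

As with Verlet, the three-term recurrence can be solved for the oldest state, so the backward pass recovers the initial condition exactly in exact arithmetic.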
We show that the second-order network is stable in the sense of (6) when we assume stationary weights. Weaker results for the time-dependent dynamic are possible when assuming the time derivative of the weights to be bounded.
Let the weights be constant in time and assume that the activation function satisfies a pointwise growth bound for all arguments. Then, the forward propagation through the second-order network satisfies (6).
For brevity, we consider the forward propagation of a single example and a solution to (11), together with its associated energy. Given the pointwise bound on the activation function assumed above, this energy can be bounded by the energy associated with a linear wave-like hyperbolic equation. Since the weights are constant in time by assumption, the energy of the linear system is conserved. Thus, the energy of the hyperbolic network in (12) is positive and bounded from above by the energy of the linear wave equation. Applying this argument to perturbations of the initial condition, we derive (6), and thus the forward propagation is stable.
4 Regularization
The proposed continuous interpretation of the CNNs also provides new perspectives on regularization. To enforce stability of the forward propagation, the linear operator in (7) should not change drastically in time. This suggests adding a smoothness regularizer in time. Previous work used a squared seminorm to smooth kernels over time with the goal of avoiding overfitting. A theoretically more appropriate function space consists of all kernels that are piecewise smooth in time. To this end, we introduce a regularizer whose first term applies a smoothed absolute-value function, with a small conditioning parameter, to the time derivative of the weights. This term can be seen as a total variation penalty in time that favors piecewise constant dynamics. The associated coefficients are regularization parameters that are assumed to be fixed.
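One plausible discretization of the total-variation-in-time term, with the weights stacked along a time axis; the smoothed absolute value sqrt(x^2 + eps) and the exact scaling with the step size are assumptions for this sketch.

```python
import numpy as np

def smoothed_tv_in_time(thetas, h, eps=1e-3):
    # piecewise-smoothness penalty on the weights along the time axis:
    # sum over time intervals of the smoothed absolute value of the
    # finite difference (theta_{j+1} - theta_j) / h, scaled by h
    thetas = np.asarray(thetas)
    d = np.diff(thetas, axis=0) / h          # discrete time derivative
    return h * np.sqrt(d ** 2 + eps).sum()   # smoothed |.|, summed

h = 0.1
theta_const = np.ones((5, 3))                # weights constant in time
theta_osc = np.array([[1.0] * 3, [-1.0] * 3,
                      [1.0] * 3, [-1.0] * 3, [1.0] * 3])
r_const = smoothed_tv_in_time(theta_const, h)
r_osc = smoothed_tv_in_time(theta_osc, h)
```

Weights that are constant (or piecewise constant) in time incur almost no penalty, while rapidly oscillating weights are penalized heavily, which is exactly the bias toward piecewise constant dynamics described above.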
A second important aspect of stability is to keep the time step size sufficiently small. Since the step size can be absorbed into the weights, we use a box constraint on the weights and fix the time step size in our numerical experiments.
5 Numerical Experiments
Our central goal is to show that, despite their modeling restrictions, our new network types achieve competitive results. We use the same basic architecture for all experiments, do not excessively tune hyperparameters individually for each case, and employ a simple data augmentation technique consisting of random flipping and cropping.
Our architecture is similar to the ones in [22, 9] and contains an opening layer, followed by several blocks each containing a few time steps of a ResNet and a connector that increases the width of the CNN and coarsens the images. Our focus is on the different options for defining the ResNet block using parabolic and hyperbolic networks. To this end, we choose the same basic components for the opening and connecting layers. The opening layer increases the number of channels from three (for RGB image data) to the number of channels of the first ResNet using convolution operators, a batch normalization layer, and a ReLU activation function. We build the connecting layers using convolution operators that increase the number of channels, a batch normalization layer, a ReLU activation, and an average pooling operator that coarsens the images by a factor of two. Finally, we obtain the output features by averaging the features of each channel to ensure translation-invariance. The ResNet blocks use the symmetric layer (7) including the total variation normalization (9) with a fixed conditioning parameter. The classifier is modeled using a fully-connected layer, a softmax transformation, and a cross-entropy loss.
In order to estimate the weights, we use a standard stochastic gradient descent (SGD) method with momentum. We use a piecewise constant step size (in this context also called learning rate), starting with 0.1, which is decreased by a factor of 0.2 at a-priori specified epochs. For the STL-10 and CIFAR-10 examples, we perform 60, 20, and 20 epochs with step sizes of 0.1, 0.02, and 0.004, respectively. For the more challenging CIFAR-100 dataset, we use 60, 40, 40, 40, and 20 epochs with step sizes of 0.1, 0.02, 0.004, 0.0008, and 0.00016, respectively. In all examples, the SGD steps are computed using mini-batches consisting of 125 randomly chosen examples. For data augmentation, we apply a random horizontal flip, pad the images by a factor of 1/16 with zeros in all directions, and randomly crop the image by 1/8 of the pixels, counting from the lower-left corner. The training is performed using the open-source software Meganet on a workstation running Ubuntu 16.04 and MATLAB 2018b with two Intel(R) Xeon(R) CPU E5-2620 v4 processors and 64 GB of RAM. We use an NVIDIA Titan X GPU for accelerating the computation through the frameworks CUDA 9.1 and CuDNN 7.0.
Results for STL-10.
The STL-10 dataset contains 13,000 digital color images that are evenly divided into ten categories, which can be inferred from Fig. 1. The dataset is split into 5,000 training and 8,000 test images. The STL-10 data is a popular benchmark test for image classification algorithms and is challenging due to the relatively small number of training images.
For each dynamic, the network uses four ResNet blocks with 16, 32, 64, and 128 channels and successively coarsened image sizes. Within the ResNet blocks, we perform three time steps with a fixed step size and include a total variation normalization layer and ReLU activation. This architecture leads to 324,794 trainable weights for the Hamiltonian network and 618,554 weights for the parabolic and second-order networks, respectively. We note that our networks are substantially smaller than commonly used ResNets; reference architectures in the literature contain about 2 million parameters. Reducing the number of parameters is important during training and, e.g., when trained networks have to be deployed on devices with limited memory. The regularization parameters are fixed across the networks.
To show how the generalization improves as more training data becomes available, we train the network with an increasing number of examples that we choose randomly from the training dataset. We also randomly sample 1,000 examples from the remaining training data to build a validation set, which we use to monitor the performance after each full epoch. We use no data augmentation in this experiment. In all cases, the training accuracy was close to 100%. After the training, we compute the accuracy of the networks parameterized by the weights that performed best on the validation data for all 8,000 test images; see Fig. 2. The predictions of the three networks may vary for single examples without any apparent pattern (see also Fig. 1). However, overall their performance and convergence are comparable, which leads to similarities in the confusion matrices; see Fig. 3.
To show the overall performance of the networks, we train the networks using a random partition of the examples into 4,000 training and 1,000 validation data. For data augmentation, we use horizontal flipping and random cropping. The performance of the networks on the validation data after each epoch can be seen in the bottom plot of Fig. 2. As before, the optimization found weights that almost perfectly fit the training data. After the training, we compute the loss and classification accuracy for all the test images. For this example, the parabolic and Hamiltonian networks perform slightly better than the second-order network (77.0% and 78.3% vs. 74.3% test classification accuracy, respectively; see Table 1). It is important to emphasize that the Hamiltonian network achieves the best test accuracy using only about half as many trainable weights as the other two networks. These results are competitive with the results reported, e.g., in [44, 14]. Fine-tuning of hyperparameters such as the step size, the number of time steps, and the width of the network may achieve additional improvements for each dynamic. Using these means and performing training on all 5,000 images, we achieved a test accuracy of around 85% in related experiments.
Test accuracies and losses are computed on 8,000 (STL-10) and 10,000 (CIFAR-10 and CIFAR-100) test images.

| | STL-10 weights | STL-10 accuracy (loss) | CIFAR-10 weights | CIFAR-10 accuracy (loss) | CIFAR-100 weights | CIFAR-100 accuracy (loss) |
|---|---|---|---|---|---|---|
| Parabolic | 618,554 | 77.0% (0.711) | 502,570 | 88.5% (0.333) | 652,484 | 64.8% (1.234) |
| Hamiltonian | 324,794 | 78.3% (0.789) | 264,106 | 89.3% (0.349) | 362,180 | 64.9% (1.237) |
| Second-order | 618,554 | 74.3% (0.810) | 502,570 | 89.2% (0.333) | 652,484 | 65.4% (1.232) |
Results for CIFAR 10/100.
For an additional comparison of the proposed architectures, we use the CIFAR-10 and CIFAR-100 datasets. Each of these datasets consists of 60,000 labeled RGB images chosen from the 80 million tiny images dataset. In both cases, we use 50,000 images for training and validation and keep the remaining 10,000 to test the generalization of the trained weights. While CIFAR-10 consists of 10 categories, the CIFAR-100 dataset contains 100 categories, and thus classification is more challenging.
Our architectures contain three blocks of parabolic or hyperbolic networks between which the image size is successively coarsened. For the simpler CIFAR-10 problem, we use a narrower network, while for the CIFAR-100 challenge we use more channels (32, 64, and 128) and add a final connecting layer that increases the number of channels to 256. This leads to networks whose number of trainable weights varies between 264,106 and 652,484; see also Table 1. The regularization parameters are fixed and similar to those of the previous experiment.
As for the STL-10 dataset, the three proposed architectures achieve comparable results on these benchmarks; see the convergence plots in Figure 4 and the test accuracies in Table 1. In all cases, the training loss is near zero after training. For these datasets, the second-order network slightly outperforms the other networks. Additional tuning of the learning rate, regularization parameters, and other hyperparameters may further improve the results shown here. Using those techniques, architectures with more time steps, and the entire training dataset, we achieved about 5% higher accuracy on CIFAR-10 and 9% higher accuracy on CIFAR-100 in .
6 Discussion and Outlook
In this paper, we establish a link between deep residual convolutional neural networks and PDEs. The relation provides a general framework for designing, analyzing, and training those CNNs. It also exposes the dependence of learned weights on the image resolution used in training. As examples, we derive three PDE-based network architectures that are forward stable (the parabolic network) or forward-backward stable (the hyperbolic networks).
It is well known that different types of PDEs have different properties. For example, linear parabolic PDEs have decay properties, while linear hyperbolic PDEs conserve energy. Hence, it is common to choose different numerical techniques for solving and optimizing different kinds of PDEs. For a standard convolutional ResNet, the type of the underlying PDE is not known a priori, as it depends on the trained weights. This makes it difficult to ensure the stability of the trained network and to choose adequate time-integration methods. These considerations motivate us to restrict the convolutional ResNet architecture a priori to discretizations of nonlinear PDEs that are stable.
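This contrast between decay and conservation can be reproduced in a few lines: an explicit Euler discretization of the 1D heat equation shrinks the solution norm, while a leapfrog discretization of the 1D wave equation conserves a discrete energy. The grid size and step sizes below are arbitrary illustrative choices:

```python
import numpy as np

n = 64
h = 1.0 / (n + 1)
# 1D Laplacian with homogeneous Dirichlet boundary conditions
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / h**2
x = np.linspace(h, 1.0 - h, n)
u0 = np.sin(2 * np.pi * x)

# Parabolic (heat equation u_t = u_xx): explicit Euler, stable for dt <= h^2 / 2
dt = 0.4 * h**2
u_heat = u0.copy()
for _ in range(200):
    u_heat = u_heat + dt * lap @ u_heat
print(np.linalg.norm(u_heat) < np.linalg.norm(u0))   # True: the norm decays

# Hyperbolic (wave equation u_tt = u_xx): leapfrog, stable for dt <= h (CFL)
dt = 0.5 * h
u_prev, u = u0.copy(), u0.copy()                     # zero initial velocity

def energy(u_next, u_cur):
    """Discrete energy that leapfrog conserves exactly for linear problems."""
    v = (u_next - u_cur) / dt
    return 0.5 * v @ v - 0.5 * u_next @ (lap @ u_cur)

e0 = energy(2 * u - u_prev + dt**2 * lap @ u, u)
for _ in range(200):
    u, u_prev = 2 * u - u_prev + dt**2 * lap @ u, u
e1 = energy(2 * u - u_prev + dt**2 * lap @ u, u)
print(abs(e1 - e0) / abs(e0) < 1e-8)                 # True: energy is conserved
```

Note that each scheme is stable only under its own step-size restriction, which is exactly why knowing the type of the underlying PDE matters for choosing the time integrator.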
In our numerical examples, the new architectures perform well despite the constraints imposed on the networks. In fact, using networks of relatively modest size, we obtain results close to those of state-of-the-art networks with considerably more weights. This may not hold in general, and future research will show which types of architectures are best suited for a given learning task. Our intuition is that, e.g., hyperbolic networks may be preferable to parabolic ones for image extrapolation tasks, to ensure the preservation of edge information in the images. In contrast, we anticipate that parabolic networks will perform better on tasks that require filtering, e.g., image denoising.
We note that our view of CNNs mirrors the developments in PDE-based image processing in the 1990s. PDE-based methods have since significantly enhanced our mathematical understanding of image processing tasks and opened the door to many popular algorithms and techniques. We hope that continuous models of CNNs will result in similar breakthroughs and, e.g., help streamline the design of network architectures and improve training outcomes with less trial and error.
L.R. is supported by the U.S. National Science Foundation (NSF) through awards DMS 1522599 and DMS 1751636 and by the NVIDIA Corporation’s GPU grant program. We thank Martin Burger for outlining how to show stability using monotone operator theory and Eran Treister and other contributors of the Meganet package. We also thank the Isaac Newton Institute (INI) for Mathematical Sciences for support and hospitality during the programme on Generative Models, Parameter Learning and Sparsity (VMVW02) when work on this paper was undertaken. INI was supported by EPSRC Grant Number: LNAG/036, RG91310.
-  L. Ambrosio and V. M. Tortorelli. Approximation of Functionals Depending on Jumps by Elliptic Functionals via Gamma-Convergence. Commun. Pure Appl. Math., 43(8):999–1036, 1990.
-  U. Ascher. Numerical Methods for Evolutionary Differential Equations. SIAM, Philadelphia, USA, 2010.
-  U. Ascher, R. Mattheij, and R. Russell. Numerical Solution of Boundary Value Problems for Ordinary Differential Equations. SIAM, Philadelphia, USA, 1995.
-  Y. Bengio et al. Learning deep architectures for AI. Found. Trends Mach. Learn., 2(1):1–127, 2009.
-  Y. Bengio, P. Simard, and P. Frasconi. Learning Long-Term Dependencies with Gradient Descent Is Difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
-  L. T. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, editors. Real-time PDE-constrained Optimization. Society for Industrial and Applied Mathematics (SIAM), 2007.
-  A. Borzì and V. Schulz. Computational optimization of systems governed by partial differential equations, volume 8. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2012.
-  T. F. Chan and L. A. Vese. Active contours without edges. IEEE Trans. Image Process., 10(2):266–277, 2001.
-  B. Chang, L. Meng, E. Haber, L. Ruthotto, D. Begert, and E. Holtham. Reversible architectures for arbitrarily deep residual neural networks. In AAAI Conference on AI, 2018.
-  P. Chaudhari, A. Oberman, S. Osher, S. Soatto, and G. Carlier. Deep Relaxation: Partial Differential Equations for Optimizing Deep Neural Networks. arXiv preprint 1704.04932, Apr. 2017.
-  T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud. Neural ordinary differential equations. arXiv preprint arXiv:1806.07366, 2018.
-  Y. Chen and T. Pock. Trainable Nonlinear Reaction Diffusion: A Flexible Framework for Fast and Effective Image Restoration. IEEE Trans. Pattern Anal. Mach. Intell., 39(6):1256–1272, 2017.
-  A. Coates, A. Ng, and H. Lee. An Analysis of Single-Layer Networks in Unsupervised Feature Learning. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, pages 215–223, June 2011.
-  A. Dundar, J. Jin, and E. Culurciello. Convolutional Clustering for Unsupervised Learning. In ICLR, Nov. 2015.
-  W. E. A Proposal on Machine Learning via Dynamical Systems. Comm. Math. Statist., 5(1):1–11, 2017.
-  A. N. Gomez, M. Ren, R. Urtasun, and R. B. Grosse. The reversible residual network: Backpropagation without storing activations. In Adv Neural Inf Process Syst, pages 2211–2221, 2017.
-  I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, Nov. 2016.
-  I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and Harnessing Adversarial Examples. arXiv.org, Dec. 2014.
-  E. Haber and L. Ruthotto. Stable architectures for deep neural networks. Inverse Probl., 34:014004, 2017.
-  E. Haber, L. Ruthotto, and E. Holtham. Learning across scales - A multiscale method for convolution neural networks. In AAAI Conference on AI, volume abs/1703.02009, pages 1–8, 2017.
-  P. C. Hansen, J. G. Nagy, and D. P. O’Leary. Deblurring Images: Matrices, Spectra, and Filtering. SIAM, Philadelphia, USA, 2006.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
-  J. M. Hernández-Lobato, M. A. Gelbart, R. P. Adams, M. W. Hoffman, and Z. Ghahramani. A general framework for constrained Bayesian optimization using information-based search. J. Mach. Learn. Res., 17:2–51, 2016.
-  R. Herzog and K. Kunisch. Algorithms for PDE-constrained optimization. GAMM-Mitteilungen, 33(2):163–176, Oct. 2010.
-  G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag., 29(6):82–97, 2012.
-  B. K. Horn and B. G. Schunck. Determining optical flow. Artificial intelligence, 17(1-3):185–203, 1981.
-  S. Ioffe and C. Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In 32nd International Conference on Machine Learning, pages 448–456, Feb. 2015.
-  A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
-  A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. Adv Neural Inf Process Syst, 61:1097–1105, 2012.
-  Y. LeCun and Y. Bengio. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361:255–258, 1995.
-  Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
-  Y. LeCun, K. Kavukcuoglu, and C. Farabet. Convolutional networks and applications in vision. IEEE International Symposium on Circuits and Systems: Nano-Bio Circuit Fabrics and Systems, page 253–256, 2010.
-  Q. Li, L. Chen, C. Tai, and E. Weinan. Maximum principle based algorithms for deep learning. The Journal of Machine Learning Research, 18(1):5998–6026, 2017.
-  J. Modersitzki. FAIR: Flexible Algorithms for Image Registration. Fundamentals of Algorithms. SIAM, Philadelphia, USA, 2009.
-  S. M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
-  D. Mumford and J. Shah. Optimal Approximations by Piecewise Smooth Functions and Associated Variational-Problems. Commun. Pure Appl. Math., 42(5):577–685, 1989.
-  K. Pei, Y. Cao, J. Yang, and S. Jana. Deepxplore: Automated whitebox testing of deep learning systems. In 26th Symposium on Oper. Sys. Princ., pages 1–18. ACM Press, New York, USA, 2017.
-  P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell., 12(7):629–639, 1990.
-  R. Raina, A. Madhavan, and A. Y. Ng. Large-scale deep unsupervised learning using graphics processors. In 26th Annual International Conference, pages 873–880, New York, USA, 2009. ACM.
-  C. Rogers and T. Moodie. Wave Phenomena: Modern Theory and Applications. North-Holland Mathematics Studies. Elsevier Science, 1984.
-  F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386–408, 1958.
-  L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear Total Variation Based Noise Removal Algorithms. Physica D, 60(1-4):259–268, 1992.
-  O. Scherzer, M. Grasmair, H. Grossauer, M. Haltmeier, and F. Lenzen. Variational methods in imaging. Springer, New York, USA, 2009.
-  S. Yang, P. Luo, C. C. Loy, K. W. Shum, and X. Tang. Deep Visual Representation Learning with Target Coding. In AAAI Conference on AI, pages 3848–3854, Jan. 2015.
-  J. Weickert. Anisotropic Diffusion in Image Processing. Teubner, Stuttgart, 1998.