1 Introduction
Deep learning has achieved great success in many machine learning tasks. End-to-end deep architectures can effectively extract features relevant to the given labels and achieve state-of-the-art accuracy in various applications (Bengio, 2009). Network design is one of the central tasks in deep learning. Its main objective is to grant the networks strong generalization power using as few parameters as possible. The first ultra-deep convolutional network is the ResNet (He et al., 2015b), which uses skip connections to keep feature maps in different layers in the same scale and to avoid gradient vanishing. Structures other than the skip connections of the ResNet were also introduced to avoid gradient vanishing, such as dense connections (Huang et al., 2016a), fractal paths (Larsson et al., 2016) and Dirac initialization (Zagoruyko & Komodakis, 2017). Furthermore, there have been many attempts to improve the accuracy of image classification by modifying the residual blocks of the ResNet. Zagoruyko & Komodakis (2016) suggested that one needs to double the number of layers of ResNet to achieve a fraction of a percent improvement in accuracy, and proposed a widened architecture that improves accuracy more efficiently. Zhang et al. (2017) pointed out that simply modifying the depth or width of ResNet might not be the best way of architecture design; exploring structural diversity, which is an alternative dimension in network design, may lead to more effective networks. In Szegedy et al. (2017), Zhang et al. (2017), Xie et al. (2017), Li et al. (2017) and Hu et al. (2017), the authors further improved the accuracy of the networks by carefully designing residual blocks via increasing the width of each block, changing the topology of the network and following certain empirical observations. In the literature, network design is mainly empirical. It remains a mystery whether there is a general principle to guide the design of effective and compact deep networks.
Observe that each residual block of ResNet can be written as $u_{n+1} = u_n + \Delta t\, f(u_n)$, which is one step of the forward Euler discretization (Appendix A.1) of the ordinary differential equation (ODE) $\dot u(t) = f(u(t))$ (E, 2017). This suggests that there might be a connection between discrete dynamic systems and deep networks with skip connections. In this work, we will show that many state-of-the-art deep network architectures, such as PolyNet (Zhang et al., 2017), FractalNet (Larsson et al., 2016) and RevNet (Gomez et al., 2017), can be considered as different discretizations of ODEs. From the perspective of this work, the success of these networks is mainly due to their ability to efficiently approximate dynamic systems. On a side note, differential equations are among the most powerful tools used in low-level computer vision tasks such as image denoising, deblurring, registration and segmentation (Osher & Paragios, 2003; Aubert & Kornprobst, 2006; Chan & Shen, 2005). This may also bring insights into the success of deep neural networks in low-level computer vision. Furthermore, the connection between architectures of deep neural networks and numerical approximations of ODEs enables us to design new and more effective deep architectures by selecting certain discrete approximations of ODEs. As an example, we design a new network structure called the linear multistep architecture (LM-architecture), which is inspired by the linear multistep method in numerical ODEs (Ascher & Petzold, 1997). This architecture can be applied to any ResNet-like networks. In this paper, we apply the LM-architecture to ResNet and ResNeXt (Xie et al., 2017) and achieve noticeable improvements on CIFAR and ImageNet with comparable numbers of trainable parameters. We also explain the performance gain using the concept of modified equations from numerical analysis. It is known in the literature that introducing randomness by injecting noise into the forward process can improve the generalization of deep residual networks. This includes stochastic dropping of residual blocks (Huang et al., 2016b) and stochastic shaking of the outputs from different branches of each residual block (Gastaldi, 2017). In this work, we show that any ResNet-like network with noise injection can be interpreted as a discretization of a stochastic dynamic system. This gives a relatively unified explanation of these stochastic learning processes using stochastic control. Furthermore, by relating stochastic training strategies to stochastic dynamic systems, we can easily apply stochastic training to networks with the proposed LM-architecture. As an example, we introduce stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR-10.
1.1 Related work
The link between ResNet (Figure 1(a)) and ODEs was first observed by E (2017), where the author formulated the ODE $\dot u(t) = f(u(t))$ as the continuum limit of the ResNet. Liao & Poggio (2016) bridged ResNet with recurrent neural networks (RNNs), where the latter are known as approximations of dynamic systems. Sonoda & Murata (2017) and Li & Shi (2017) also regarded ResNet as a dynamic system whose trajectories are the characteristic lines of a transport equation on the distribution of the data set. Similar observations were also made by Chang et al. (2017); they designed a reversible architecture to grant stability to the dynamic system. On the other hand, many deep network designs were inspired by optimization algorithms, such as the network LISTA (Gregor & LeCun, 2010) and the ADMM-Net (Yang et al., 2016). Optimization algorithms can be regarded as discretizations of various types of ODEs (Helmke & Moore, 2012), among which the simplest example is the gradient flow. Another set of important examples of dynamic systems is partial differential equations (PDEs), which have been widely used in low-level computer vision tasks such as image restoration. There have been some recent attempts to combine deep learning with PDEs for various computer vision tasks, i.e. to balance handcrafted modeling and data-driven modeling.
Liu et al. (2010) and Liu et al. (2013) proposed to use linear combinations of a series of handcrafted PDE terms and used optimal control methods to learn the coefficients. Later, Fang et al. (2017) extended their model to handle classification tasks and proposed a learned PDE model (L-PDE). However, for classification tasks, the dynamics (i.e. the trajectories generated by passing data to the network) should be interpreted as the characteristic lines of a PDE on the distribution of the data set. This means that using spatial differential operators in the network is not essential for classification tasks. Furthermore, the discretizations of differential operators in the L-PDE are not trainable, which significantly reduces the network's expressive power and stability. Chen et al. (2015) proposed a feed-forward network to learn the optimal nonlinear anisotropic diffusion for image denoising. Unlike the previous work, their network used trainable convolution kernels instead of fixed discretizations of differential operators, and used radial basis functions to approximate the nonlinear diffusivity function. More recently, Long et al. (2017) designed a network, called PDE-Net, to learn more general evolution PDEs from sequential data. The learned PDE-Net can accurately predict the dynamical behavior of data and has the potential to reveal the underlying PDE model that drives the observed data. In our work, we focus on a different perspective. First of all, we do not require the ODE $\dot u = f(u)$ to be associated with any optimization problem, nor do we assume any differential structure in $f$. The function $f$ is learned for a given supervised learning task. Secondly, we draw a relatively comprehensive connection between the architectures of popular deep networks and discretization schemes of ODEs. More importantly, we demonstrate that the connection between deep networks and numerical ODEs enables us to design new and more effective deep networks. As an example, we introduce the LM-architecture to ResNet and ResNeXt, which improves the accuracy of the original networks.
We also note that our viewpoint enables us to easily explain why ResNet can achieve good accuracy by dropping out some residual blocks after training, whereas dropping subsampling layers often leads to an accuracy drop (Veit et al., 2016). This is simply because each residual block is one step of the discretized ODE; hence, dropping some residual blocks only amounts to modifying the step size of the discrete dynamic system, while the subsampling layers are not part of the ODE model. Our explanation is similar to the unrolled iterative estimation proposed by Greff et al. (2016), the difference being that we believe it is the data-driven ODE that does the unrolled iterative estimation.

2 Numerical Differential Equation, Deep Networks and Beyond
In this section, we show that many existing deep neural networks can be considered as different numerical schemes approximating ODEs of the form $\dot u(t) = f(u(t))$. Based on this observation, we introduce a new structure, called the linear multistep architecture (LM-architecture), which is inspired by the well-known linear multistep method in numerical ODEs. The LM-architecture can be applied to any ResNet-like networks. As an example, we apply it to ResNet and ResNeXt and demonstrate the performance gain of such modification on the CIFAR and ImageNet data sets.
2.1 Numerical Schemes And Network Architectures
PolyNet (Figure 1(b)), proposed by Zhang et al. (2017), introduced a PolyInception module in each residual block to enhance the expressive power of the network. The PolyInception module includes polynomial compositions that can be described as
$(I + F + F^2)\cdot x = x + F(x) + F(F(x)).$
We observe that the PolyInception module can be interpreted as an approximation to one step of the backward Euler (implicit) scheme (Appendix A.1),
$u_{n+1} = (I - \Delta t\, f)^{-1} u_n.$
Indeed, we can formally rewrite $(I - \Delta t\, f)^{-1}$ as
$I + \Delta t\, f + (\Delta t\, f)^2 + \cdots.$
Therefore, the architecture of PolyNet can be viewed as an approximation to the backward Euler scheme solving the ODE $\dot u(t) = f(u(t))$. Note that the implicit scheme allows a larger step size (Ascher & Petzold, 1997), which in turn allows a smaller number of residual blocks in the network. This explains why PolyNet is able to reduce depth by increasing the width of each residual block while achieving state-of-the-art classification accuracy.
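To make this connection concrete, the following is a minimal numerical sketch (not from the original experiments): for a linear map $f(u) = a\,u$, the truncated series $I + \Delta t f + (\Delta t f)^2$, which mirrors the PolyInception composition, approximates the exact implicit step $(I - \Delta t f)^{-1}u$. The values of $a$ and $\Delta t$ are illustrative assumptions.

```python
import numpy as np

# Sketch: compare one backward Euler step (I - dt*f)^{-1} u with its truncated
# Neumann series I + dt*f + (dt*f)^2 for the linear map f(u) = a*u.
a, dt, u = -0.8, 0.5, 1.0                              # illustrative scalar example
backward_euler = u / (1.0 - dt * a)                    # exact implicit step
poly_approx = u + dt * a * u + (dt * a) ** 2 * u       # I + F + F^2 with F = dt*f
print(backward_euler, poly_approx)                     # ~0.714 vs 0.76, close for |dt*a| < 1
```

The approximation improves as more polynomial terms are added, which is the sense in which a PolyInception block emulates an implicit step.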
FractalNet (Larsson et al., 2016) (Figure 1(c)) is designed based on self-similarity: it repeatedly applies a simple expansion rule to generate deep networks whose structural layouts are truncated fractals. We observe that the macro-structure of FractalNet can be interpreted as the well-known Runge-Kutta scheme in numerical analysis. Recall that FractalNet is built recursively, where the block of the next order joins (by averaging) a single convolutional branch with the two-fold composition of the block of the previous order. For simplicity of presentation, we only demonstrate the FractalNet of order 2. In this case, every block averages a one-step branch with a two-step composed branch, which resembles the Runge-Kutta scheme of order 2 solving the ODE $\dot u(t) = f(u(t))$ (see Appendix A.2).
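The order-2 case can be made concrete: the Heun form of the 2nd-order Runge-Kutta step is exactly an average of a shallow one-step branch and a deeper branch that composes through the shallow output, which mirrors a two-column fractal block joined by averaging. The sketch below is illustrative; the residual map f and step size dt are assumptions.

```python
import numpy as np

def heun_step(f, u, dt):
    """Standard order-2 Runge-Kutta (Heun) step for du/dt = f(u)."""
    k1 = f(u)
    k2 = f(u + dt * k1)
    return u + dt * 0.5 * (k1 + k2)

def two_branch_step(f, u, dt):
    """The same update written as two averaged branches: a shallow one-step
    branch and a deeper branch that re-evaluates f at the shallow output."""
    shallow = u + dt * f(u)          # one forward-Euler sub-step
    deep = u + dt * f(shallow)       # composed branch through the shallow output
    return 0.5 * (shallow + deep)    # join the branches by averaging

f = lambda u: -u                     # toy dynamics, for illustration only
u0, dt = np.array([1.0]), 0.1
print(heun_step(f, u0, dt), two_branch_step(f, u0, dt))   # identical outputs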
RevNet (Figure 1(d)), proposed by Gomez et al. (2017), is a reversible network which does not require storing activations during forward propagation. The RevNet can be expressed as the discrete dynamic system
$X_{n+1} = X_n + f_1(Y_n), \qquad Y_{n+1} = Y_n + f_2(X_{n+1}),$
which can be interpreted as a simple forward Euler approximation of the dynamic system
$\dot X = f_1(Y), \qquad \dot Y = f_2(X).$
Note that reversibility, which means we can simulate the dynamics from the end time back to the initial time, is also an important notion in dynamic systems. There have also been attempts to design reversible schemes for dynamic systems, such as Nguyen & Mcmechan (2015).
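As a minimal sketch (the branch functions f1, f2 and step size dt are illustrative assumptions, not the paper's configuration), the reversible block below is one forward Euler step of the coupled system, and its input can be reconstructed exactly from its output, so activations need not be stored.

```python
import numpy as np

f1 = lambda y: np.tanh(y)     # illustrative branch functions
f2 = lambda x: -0.5 * x
dt = 1.0

def rev_forward(x, y):
    x_next = x + dt * f1(y)          # forward Euler step for dX/dt = f1(Y)
    y_next = y + dt * f2(x_next)     # forward Euler step for dY/dt = f2(X)
    return x_next, y_next

def rev_inverse(x_next, y_next):
    # Exact inversion: activations are reconstructed rather than stored.
    y = y_next - dt * f2(x_next)
    x = x_next - dt * f1(y)
    return x, y

x0, y0 = np.array([0.3]), np.array([-1.2])
print(rev_inverse(*rev_forward(x0, y0)))   # recovers (x0, y0) up to float error
```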
Network  Related ODE  Numerical Scheme
ResNet, ResNeXt, etc.  $\dot u = f(u)$  Forward Euler scheme
PolyNet  $\dot u = f(u)$  Approximation of backward Euler scheme
FractalNet  $\dot u = f(u)$  Runge-Kutta scheme
RevNet  $\dot X = f_1(Y),\ \dot Y = f_2(X)$  Forward Euler scheme
2.2 LM-ResNet: A New Deep Architecture from Numerical Differential Equations
We have shown that architectures of some successful deep neural networks can be interpreted as different discrete approximations of dynamic systems. In this section, we propose a new structure, called the linear multistep architecture (LM-architecture), based on the well-known linear multistep scheme in numerical ODEs (which is briefly recalled in Appendix A.3). The LM-architecture can be written as follows:

$u_{n+1} = (1 - k_n)\, u_n + k_n\, u_{n-1} + f(u_n),$  (1)

where $k_n$ is a trainable parameter for each layer $n$. A schematic of the LM-architecture is presented in Figure 2. Note that the midpoint and leapfrog network structures in Chang et al. (2017) are special cases of ours. The LM-architecture is a 2-step method approximating the ODE $\dot u(t) = f(u(t))$. Therefore, it can be applied to any ResNet-like networks, including those mentioned in the previous section. As an example, we apply the LM-architecture to ResNet and ResNeXt. We call these new networks LM-ResNet and LM-ResNeXt. We trained LM-ResNet and LM-ResNeXt on CIFAR (Krizhevsky & Hinton, 2009) and ImageNet (Russakovsky et al., 2014), and both achieve improvements over the original ResNet and ResNeXt.
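As an illustration, here is a minimal PyTorch-style sketch of one LM-architecture step (1) with a trainable coefficient $k_n$; the residual branch, feature sizes and initialization are assumptions for the example, not the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class LMBlock(nn.Module):
    """Sketch of one LM-architecture step (Eq. 1):
    u_{n+1} = (1 - k_n) * u_n + k_n * u_{n-1} + f(u_n), with k_n trainable."""
    def __init__(self, residual_branch):
        super().__init__()
        self.f = residual_branch                    # any ResNet-like residual branch
        self.k = nn.Parameter(torch.tensor(-0.05))  # illustrative initialization

    def forward(self, u_n, u_prev):
        u_next = (1 - self.k) * u_n + self.k * u_prev + self.f(u_n)
        return u_next, u_n                          # keep u_n as the next "previous" state

# Toy usage with a hypothetical two-layer residual branch on feature vectors.
branch = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))
block = LMBlock(branch)
u = torch.randn(4, 16)
u, u_prev = block(u, u)          # the first step can reuse u as u_{n-1}
u, u_prev = block(u, u_prev)
```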
Implementation Details. For CIFAR-10 and CIFAR-100, we train and test our networks on the training and testing sets as originally given by the data sets. For ImageNet, our models are trained on the training set with 1.28 million images and evaluated on the validation set with 50k images. On CIFAR, we follow the simple data augmentation of Lee et al. (2015) for training: 4 pixels are padded on each side, and a 32×32 crop is randomly sampled from the padded image or its horizontal flip. For testing, we only evaluate the single view of the original 32×32 image. Note that the data augmentation used by ResNet (He et al., 2015b; Xie et al., 2017) is the same as in Lee et al. (2015). On ImageNet, we follow the practice of Krizhevsky et al. (2012) and Simonyan & Zisserman (2014). Images are resized with the shorter side randomly sampled for scale augmentation (Simonyan & Zisserman, 2014), and the input image is randomly cropped from the resized image using the scale and aspect ratio augmentation of Szegedy et al. (2015). For the experiments of ResNet/LM-ResNet on CIFAR, we adopt the original design of the residual block in He et al. (2016), i.e. a small two-layer neural network as the residual block with the bn-relu-conv-bn-relu-conv structure. The residual block of LM-ResNeXt (as well as LM-ResNet-164) is the bottleneck structure used by Xie et al. (2017). We start our networks with a single conv layer, followed by 3 residual blocks, global average pooling and a fully-connected classifier. The parameters $k_n$ of the LM-architecture are initialized by random sampling. We initialize all other parameters following the method introduced by He et al. (2015a). We use SGD with a mini-batch size of 128 on CIFAR and 256 on ImageNet. On CIFAR, we apply a weight decay of 0.0001 for LM-ResNet and 0.0005 for LM-ResNeXt, and a momentum of 0.9. On ImageNet, we apply a weight decay of 0.0001 and a momentum of 0.9 for both LM-ResNet and LM-ResNeXt. For LM-ResNet on CIFAR-10 (CIFAR-100), we start with a learning rate of 0.1, divide it by 10 at 80 (150) and 120 (225) epochs, and terminate training at 160 (300) epochs. For LM-ResNeXt on CIFAR, we start with a learning rate of 0.1, divide it by 10 at 150 and 225 epochs, and terminate training at 300 epochs.
Table 2: Testing errors (%) of LM-ResNet/LM-ResNeXt and other networks on CIFAR.

Model  Layers  Error (%)  Params  Dataset
ResNet (He et al. (2015b))  20  8.75  0.27M  CIFAR-10
ResNet (He et al. (2015b))  32  7.51  0.46M  CIFAR-10
ResNet (He et al. (2015b))  44  7.17  0.66M  CIFAR-10
ResNet (He et al. (2015b))  56  6.97  0.85M  CIFAR-10
ResNet (He et al. (2016))  110, pre-act  6.37  1.7M  CIFAR-10
LM-ResNet (Ours)  20, pre-act  8.33  0.27M  CIFAR-10
LM-ResNet (Ours)  32, pre-act  7.18  0.46M  CIFAR-10
LM-ResNet (Ours)  44, pre-act  6.66  0.66M  CIFAR-10
LM-ResNet (Ours)  56, pre-act  6.31  0.85M  CIFAR-10
ResNet (Huang et al. (2016b))  110, pre-act  27.76  1.7M  CIFAR-100
ResNet (He et al. (2016))  164, pre-act  24.33  2.55M  CIFAR-100
ResNet (He et al. (2016))  1001, pre-act  22.71  18.88M  CIFAR-100
FractalNet (Larsson et al. (2016))  20  23.30  38.6M  CIFAR-100
FractalNet (Larsson et al. (2016))  40  22.49  22.9M  CIFAR-100
DenseNet (Huang et al., 2016a)  100  19.25  27.2M  CIFAR-100
DenseNet-BC (Huang et al., 2016a)  190  17.18  25.6M  CIFAR-100
ResNeXt (Xie et al. (2017))  29 (8×64d)  17.77  34.4M  CIFAR-100
ResNeXt (Xie et al. (2017))  29 (16×64d)  17.31  68.1M  CIFAR-100
ResNeXt (our implementation)  29 (16×64d), pre-act  17.65  68.1M  CIFAR-100
LM-ResNet (Ours)  110, pre-act  25.87  1.7M  CIFAR-100
LM-ResNet (Ours)  164, pre-act  22.90  2.55M  CIFAR-100
LM-ResNeXt (Ours)  29 (8×64d), pre-act  17.49  35.1M  CIFAR-100
LM-ResNeXt (Ours)  29 (16×64d), pre-act  16.79  68.8M  CIFAR-100
Results. Testing errors of the proposed LM-ResNet/LM-ResNeXt and other deep networks on CIFAR are presented in Table 2. Figure 2 shows the overall improvements of LM-ResNet over ResNet on CIFAR-10 with varying numbers of layers. We also observe noticeable improvements of both LM-ResNet and LM-ResNeXt on CIFAR-100. Xie et al. (2017) claimed that ResNeXt can achieve lower testing error without pre-activation (pre-act). However, our results show that LM-ResNeXt with pre-act achieves lower testing errors than even the original ResNeXt without pre-act. Training and testing curves of LM-ResNeXt are plotted in Figure 3. Table 2 also lists testing errors of FractalNet and DenseNet (Huang et al., 2016a) on CIFAR-100; our proposed LM-ResNeXt-29 achieves the best result. Comparisons between LM-ResNet and ResNet on ImageNet are presented in Table 3. LM-ResNet shows improvement over ResNet with a comparable number of trainable parameters. Note that the results of ResNet on ImageNet are obtained from "https://github.com/KaimingHe/deep-residual-networks". It is worth noticing that the testing error of the 56-layer LM-ResNet is comparable to that of the 110-layer ResNet on CIFAR-10; the testing error of the 164-layer LM-ResNet is comparable to that of the 1001-layer ResNet on CIFAR-100; and the testing error of the 50-layer LM-ResNet is comparable to that of the 101-layer ResNet on ImageNet. We have similar results for LM-ResNeXt and ResNeXt as well. These results indicate that the LM-architecture can greatly compress ResNet/ResNeXt without losing much performance. We will justify this mathematically at the end of this section using the concept of modified equations from numerical analysis.
Table 3: Top-1 and top-5 errors (%) of LM-ResNet and ResNet on the ImageNet validation set.

Model  Layers  Top-1  Top-5
ResNet (He et al. (2015b))  50  24.7  7.8
ResNet (He et al. (2015b))  101  23.6  7.1
ResNet (He et al. (2015b))  152  23.0  6.7
LM-ResNet (Ours)  50, pre-act  23.8  7.0
LM-ResNet (Ours)  101, pre-act  22.6  6.4
Explanation of the performance boost via modified equations. Given a numerical scheme approximating a differential equation, its associated modified equation (Warming & Hyett, 1974) is another differential equation which the numerical scheme approximates with higher order of accuracy than the original equation. Modified equations are used to describe the numerical behaviors of numerical schemes. For example, consider the simple one-dimensional linear transport equation $u_t = c\,u_x$. Both the Lax-Friedrichs scheme and the Lax-Wendroff scheme approximate the transport equation. However, the modified equation of the Lax-Friedrichs scheme contains an additional second-order diffusion term proportional to $u_{xx}$, while that of the Lax-Wendroff scheme contains an additional third-order dispersive term proportional to $u_{xxx}$. This shows that the Lax-Friedrichs scheme behaves diffusively, while the Lax-Wendroff scheme behaves dispersively. Consider the forward Euler scheme associated with ResNet,
$\frac{u_{n+1} - u_n}{\Delta t} = f(u_n).$
Note that, by Taylor's expansion, $u_{n+1} = u_n + \Delta t\,\dot u_n + \frac{\Delta t^2}{2}\ddot u_n + O(\Delta t^3)$. Thus, the modified equation of the forward Euler scheme reads as

$\dot u_n + \frac{\Delta t}{2}\ddot u_n = f(u_n).$  (2)

Consider the numerical scheme used in the LM-structure,
$\frac{u_{n+1} - (1 - k_n)\,u_n - k_n\,u_{n-1}}{\Delta t} = f(u_n).$
By Taylor's expansion, the modified equation of the numerical scheme associated with the LM-structure is

$(1 + k_n)\,\dot u_n + (1 - k_n)\frac{\Delta t}{2}\ddot u_n = f(u_n).$  (3)

Comparing (2) with (3), we can see that when $k_n \in (-1, 0)$, the second-order term of (3) is bigger than that of (2). The second-order term $\ddot u_n$ represents acceleration, which leads to acceleration of the convergence of $u_n$ when $f = -\nabla g$ is a gradient flow (Su & Boyd, 2015; Wilson et al., 2016). When $f(u) = \mathcal{L}(u)$ with $\mathcal{L}$ an elliptic operator, the second-order term introduces dispersion on top of the dissipation, which speeds up the flow of $u_n$ (Dong et al., 2017). In fact, this is our original motivation for introducing the LM-architecture (1). Note that when the dynamics is truly a gradient flow, i.e. $f = -\nabla g$, the difference equation of the LM-structure is stable only when $k_n$ lies in a certain range. In our experiments, we do observe that most of the learned coefficients $k_n$ lie within this range (Figure 4). Moreover, the network is indeed accelerating at the end of the dynamics, as the learned parameters $k_n$ there are negative and close to $-1$ (Figure 4).
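The Taylor expansions behind (2) and (3) can be checked symbolically. The following sympy sketch (not part of the original derivation; symbol names are illustrative) reproduces the leading terms of both modified equations.

```python
import sympy as sp

t, dt, k = sp.symbols('t Delta_t k')
u = sp.Function('u')

# Second-order Taylor expansions of u(t +/- dt) around t.
u_next = u(t) + dt * u(t).diff(t) + dt**2 / 2 * u(t).diff(t, 2)
u_prev = u(t) - dt * u(t).diff(t) + dt**2 / 2 * u(t).diff(t, 2)

# Forward Euler residual (u_{n+1} - u_n)/dt  ->  u' + (dt/2) u''   (Eq. 2).
print(sp.expand((u_next - u(t)) / dt))

# LM-structure residual (u_{n+1} - (1-k) u_n - k u_{n-1})/dt
#   ->  (1 + k) u' + (1 - k) (dt/2) u''   (Eq. 3).
print(sp.expand((u_next - (1 - k) * u(t) - k * u_prev) / dt))
```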
3 Stochastic Learning Strategy: A Stochastic Dynamic System Perspective
Although the original ResNet (He et al., 2015b) did not use dropout, several works (Huang et al., 2016b; Gastaldi, 2017) showed that it is beneficial to inject noise during training. In this section, we show that we can regard such stochastic learning strategies as approximations to stochastic dynamic systems. We hope the stochastic dynamic system perspective can shed light on the discovery of a guiding principle for stochastic learning strategies. To demonstrate the advantage of bridging stochastic dynamic systems with stochastic learning strategies, we introduce stochastic depth during the training of LM-ResNet. Our results indicate that networks with the proposed LM-architecture can also greatly benefit from stochastic learning strategies.
3.1 Noise Injection and Stochastic Dynamic Systems
As an example, we show that the two stochastic learning methods introduced in Huang et al. (2016b) and Gastaldi (2017) can be considered as weak approximations of stochastic dynamic systems.
Shake-Shake Regularization. Gastaldi (2017) introduced a stochastic affine combination of multiple branches in a residual block, which can be expressed as
$X_{n+1} = X_n + \eta\, f_1(X_n) + (1 - \eta)\, f_2(X_n),$
where $\eta \sim \mathcal{U}(0, 1)$. To find its corresponding stochastic dynamic system, we incorporate the time step size $\Delta t$ and consider

$X_{n+1} = X_n + \Delta t\Big(\tfrac{1}{2} + \tfrac{\eta - 1/2}{\sqrt{\Delta t}}\Big) f_1(X_n) + \Delta t\Big(\tfrac{1}{2} - \tfrac{\eta - 1/2}{\sqrt{\Delta t}}\Big) f_2(X_n),$  (4)

which reduces to the shake-shake regularization when $\Delta t = 1$. The above equation can be rewritten as
$X_{n+1} = X_n + \frac{\Delta t}{2}\big(f_1(X_n) + f_2(X_n)\big) + \sqrt{\Delta t}\,\big(\eta - \tfrac{1}{2}\big)\big(f_1(X_n) - f_2(X_n)\big).$
Since the random variable $\eta - \tfrac{1}{2}$ has mean 0 and variance $\tfrac{1}{12}$, following the discussion in Appendix B, the network with shake-shake regularization is a weak approximation of the stochastic dynamic system
$dX_t = \frac{1}{2}\big(f_1(X_t) + f_2(X_t)\big)\,dt + \frac{1}{\sqrt{12}}\big(f_1(X_t) - f_2(X_t)\big)\odot \mathbf{1}_N\, dB_t,$
where $B_t$ is a one-dimensional Brownian motion, $\mathbf{1}_N$ is an $N$-dimensional vector whose elements are all 1s, $N$ is the dimension of $X_n$, $f_1$ and $f_2$, and $\odot$ denotes the pointwise product of vectors. Note from (4) that we have alternatives to the original shake-shake regularization if we choose $\Delta t \neq 1$.

Stochastic Depth. Huang et al. (2016b) randomly drop residual blocks during training in order to reduce training time and improve the robustness of the learned network. We can write the forward propagation as
$X_{n+1} = X_n + \eta_n\, f(X_n),$
where $\eta_n$ is a Bernoulli random variable with $P(\eta_n = 1) = p_n$ and $P(\eta_n = 0) = 1 - p_n$. By incorporating the time step size $\Delta t$, we consider
$X_{n+1} = X_n + \Delta t\, p_n\, f(X_n) + \sqrt{\Delta t}\, \sqrt{p_n(1 - p_n)}\;\xi_n\, f(X_n), \qquad \xi_n = \frac{\eta_n - p_n}{\sqrt{p_n(1 - p_n)}},$
which reduces to the original stochastic depth training when $\Delta t = 1$. The variance of $\xi_n$ is 1. If we further assume that $p_n$ is chosen such that condition (5) of Appendix B.2 is satisfied for small $\Delta t$, then, following the discussion in Appendix B, the network with stochastic depth can be seen as a weak approximation to a stochastic dynamic system with drift $p(t)\, f(X_t)\,dt$ and diffusion $\sqrt{p(t)(1 - p(t))}\, f(X_t)\odot \mathbf{1}_N\, dB_t$. Note that this assumption also constrains how the probabilities $p_n$ should vary across blocks, which coincides with the observation made by Huang et al. (2016b, Figure 8).
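For concreteness, here is a minimal PyTorch-style sketch of a residual block trained with stochastic depth. The Bernoulli gate and the test-time scaling by the keep probability follow Huang et al. (2016b); the branch and the probability value are illustrative assumptions.

```python
import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    """Residual block with stochastic depth: during training the residual branch
    is kept with probability p (a Bernoulli gate); at test time its output is
    scaled by p."""
    def __init__(self, branch, p=0.8):
        super().__init__()
        self.branch, self.p = branch, p

    def forward(self, x):
        if self.training:
            gate = torch.bernoulli(torch.tensor(self.p, device=x.device))
            return x + gate * self.branch(x)
        return x + self.p * self.branch(x)

# Toy usage with a hypothetical two-layer residual branch.
block = StochasticDepthBlock(nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8)))
y = block(torch.randn(2, 8))
```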
In general, we can interpret stochastic training procedures as approximations of a stochastic control problem with running cost, in which one minimizes the expected terminal loss plus an accumulated running cost subject to the stochastic dynamics, where $L$ is the loss function, $T$ is the terminal time of the stochastic process, and $R$ is a regularization term serving as the running cost.

Table 4: Testing errors (%) on CIFAR-10 with different training strategies.

Model  Layers  Training Strategy  Error (%)
ResNet (He et al. (2015b))  110  Original  6.61
ResNet (He et al. (2016))  110, pre-act  Original  6.37
ResNet (Huang et al. (2016b))  56  Stochastic depth  5.66
ResNet (our implementation)  56, pre-act  Stochastic depth  5.55
ResNet (Huang et al. (2016b))  110  Stochastic depth  5.25
ResNet (Huang et al. (2016b))  1202  Stochastic depth  4.91
LM-ResNet (Ours)  56, pre-act  Stochastic depth  5.14
LM-ResNet (Ours)  110, pre-act  Stochastic depth  4.80
3.2 Stochastic Training for Networks with LM-architecture
In this section, we extend the stochastic depth training strategy to networks with the proposed LM-architecture. In order to apply the theory of Itô processes, we consider the 2nd order ODE related to the modified equation of the LM-structure (3) and rewrite it as a 1st order ODE system. Following a similar argument as in the previous section, we obtain an associated stochastic process, which can be weakly approximated by a simplified weak Euler scheme. Taking $\Delta t = 1$, we obtain the corresponding stochastic training strategy for the LM-architecture. This derivation suggests that stochastic learning for networks using the LM-architecture can be implemented simply by randomly dropping out the residual block with a prescribed probability.

Implementation Details. We test LM-ResNet with the stochastic training strategy on CIFAR-10. In our experiments, all hyperparameters are selected exactly as in Huang et al. (2016b). The probability of dropping out a residual block at each layer is a linear function of the layer index, i.e. we set the probability of dropping the current residual block to $\frac{l}{L}\,p_L$, where $l$ is the current layer of the network, $L$ is the depth of the network and $p_L$ is the dropping probability of the final residual block. In our experiments, $p_L$ is chosen separately for LM-ResNet-56 and LM-ResNet-110. During training with SGD, the initial learning rate is 0.1, divided by a factor of 10 after epochs 250 and 375, and training is terminated at 500 epochs. In addition, we use a weight decay of 0.0001 and a momentum of 0.9.
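A minimal sketch of this stochastic-depth training of an LM block is given below, assuming the linear drop schedule described above; the test-time scaling by the keep probability follows Huang et al. (2016b), and the names, branch and parameter values are illustrative.

```python
import torch

def drop_schedule(num_blocks, p_last):
    """Assumed linear schedule: block l (1-indexed) is dropped with probability (l / L) * p_last."""
    return [(l / num_blocks) * p_last for l in range(1, num_blocks + 1)]

def lm_block_stochastic(f, u_n, u_prev, k, p_drop, training=True):
    """One LM step with stochastic depth: drop the residual branch f(u_n) with
    probability p_drop during training; scale it by (1 - p_drop) at test time."""
    gate = torch.bernoulli(torch.tensor(1.0 - p_drop)) if training else torch.tensor(1.0 - p_drop)
    return (1 - k) * u_n + k * u_prev + gate * f(u_n), u_n

f = lambda u: -0.1 * u                      # toy residual branch
u = u_prev = torch.randn(4, 8)
for p in drop_schedule(num_blocks=5, p_last=0.5):
    u, u_prev = lm_block_stochastic(f, u, u_prev, k=-0.1, p_drop=p)
print(drop_schedule(5, 0.5))                # [0.1, 0.2, 0.3, 0.4, 0.5]
```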
Results. Testing errors are presented in Table 4. Training and testing curves of LM-ResNet with stochastic depth are plotted in Figure 5. Note that LM-ResNet-110 with the stochastic depth training strategy achieves a 4.80% testing error on CIFAR-10, which is even lower than that of the 1202-layer ResNet reported in the original paper. The benefit of stochastic training has been explained from different perspectives, such as Bayesian inference (Kingma et al., 2015) and information theory (Shwartz-Ziv & Tishby, 2017; Achille & Soatto, 2016). The Brownian motion involved in the aforementioned stochastic dynamic systems introduces diffusion, which leads to information gain and robustness.
4 Conclusion and Discussion
In this paper, we draw a relatively comprehensive connection between the architectures of popular deep networks and discretizations of ODEs. This connection enables us to design new and more effective deep networks. As an example, we introduce the LM-architecture to ResNet and ResNeXt, which improves the accuracy of the original networks; the proposed networks also outperform FractalNet and DenseNet on CIFAR-100. In addition, we demonstrate that networks with stochastic training procedures can be interpreted as weak approximations of stochastic dynamic systems. Thus, training networks with stochastic learning strategies can be cast as a stochastic control problem, which we hope sheds light on the discovery of a guiding principle for stochastic training. As an example, we introduce stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR-10.
As for future work, if ODEs are considered as the continuum limits of deep neural networks (neural networks with infinitely many layers), more tools from mathematical analysis can be used in the study of neural networks. We can apply geometric insights, physical laws or smart designs of numerical schemes to the design of more effective deep neural networks. On the other hand, numerical methods in control theory may inspire new optimization algorithms for network training. Moreover, stochastic control gives us a new perspective on the analysis of noise injection during network training.
Acknowledgments
Bin Dong is supported in part by NSFC 11671022 and The National Key Research and Development Program of China 2016YFC0207700. Yiping Lu is supported by the elite undergraduate training program of School of Mathematical Sciences in Peking University. Quanzheng Li is supported in part by the National Institutes of Health under Grant R01EB013293 and Grant R01AG052653.
References
 Achille & Soatto (2016) Alessandro Achille and Stefano Soatto. Information dropout: Learning optimal representations through noisy computation. arXiv preprint arXiv:1611.01353, 2016.
 Ascher & Petzold (1997) Uri M. Ascher and Linda R. Petzold. Computer Methods for Ordinary Differential Equations and DifferentialAlgebraic Equations. SIAM: Society for Industrial and Applied Mathematics, 1997.
 Aubert & Kornprobst (2006) Gilles Aubert and Pierre Kornprobst. Mathematical problems in image processing: partial differential equations and the calculus of variations, volume 147. Springer Science & Business Media, 2006.
 Bengio (2009) Yoshua Bengio. Learning deep architectures for ai. Foundations & Trends in Machine Learning, 2(1):1–127, 2009.
 Chan & Shen (2005) Tony F. Chan and Jianhong Shen. Image processing and analysis: variational, PDE, wavelet, and stochastic methods. Society for Industrial Mathematics, 2005.
 Chang et al. (2017) Bo Chang, Lili Meng, Eldad Haber, Lars Ruthotto, David Begert, and Elliot Holtham. Reversible architectures for arbitrarily deep residual neural networks. arXiv preprint arXiv:1709.03698, 2017.

 Chen et al. (2015) Yunjin Chen, Wei Yu, and Thomas Pock. On learning optimized reaction diffusion processes for effective image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5261–5269, 2015.
 Dong et al. (2017) Bin Dong, Qingtang Jiang, and Zuowei Shen. Image restoration: wavelet frame shrinkage, nonlinear evolution pdes, and beyond. Multiscale Modeling & Simulation, 15(1):606–660, 2017.
 E (2017) Weinan E. A proposal on machine learning via dynamical systems. Communications in Mathematics and Statistics, 5(1):1–11, 2017.
 Evans (2013) Lawrence C Evans. An introduction to stochastic differential equations. American Mathematical Society, 2013.

 Fang et al. (2017) Cong Fang, Zhenyu Zhao, Pan Zhou, and Zhouchen Lin. Feature learning via partial differential equation with applications to face recognition. Pattern Recognition, 69(C):14–25, 2017.
 Gastaldi (2017) Xavier Gastaldi. Shake-shake regularization. ICLR Workshop, 2017.

 Gomez et al. (2017) Aidan N. Gomez, Mengye Ren, Raquel Urtasun, and Roger B. Grosse. The reversible residual network: Backpropagation without storing activations. Advances in Neural Information Processing Systems, 2017.
 Greff et al. (2016) Klaus Greff, Rupesh K Srivastava, and Jürgen Schmidhuber. Highway and residual networks learn unrolled iterative estimation. ICLR, 2016.
 Gregor & LeCun (2010) Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. International Conference on Machine Learning, 2010.
 He et al. (2015a) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing humanlevel performance on imagenet classification. IEEE International Conference on Computer Vision, 2015a.
 He et al. (2015b) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, 2015b.
 He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. IEEE Conference on Computer Vision and Pattern Recognition, 2016.
 Helmke & Moore (2012) Uwe Helmke and John B Moore. Optimization and dynamical systems. Springer Science & Business Media, 2012.
 Hu et al. (2017) Jie Hu, Li Shen, and Gang Sun. Squeezeandexcitation networks. arXiv preprint arXiv:1709.01507, 2017.
 Huang et al. (2016a) Gao Huang, Zhuang Liu, Kilian Q Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.
 Huang et al. (2016b) Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. Deep networks with stochastic depth. European Conference on Computer Vision, 2016b.
 Øksendal (2000) Bernt Øksendal. Stochastic differential equations: An introduction with applications, 2000.
 Kingma et al. (2015) Diederik P. Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. Advances in Neural Information Processing Systems, 2015.
 Kloeden & Pearson (1992) P. E. Kloeden and R. A. Pearson. Numerical solution of stochastic differential equations. Springer Berlin Heidelberg, 1992.
 Krizhevsky & Hinton (2009) Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

 Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
 Larsson et al. (2016) Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. ICLR, 2016.
 Lee et al. (2015) ChenYu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeplysupervised nets. In Artificial Intelligence and Statistics, pp. 562–570, 2015.
 Li et al. (2017) Yanghao Li, Naiyan Wang, Jiaying Liu, and Xiaodi Hou. Factorized bilinear models for image recognition. ICCV, 2017.
 Li & Shi (2017) Zhen Li and Zuoqiang Shi. Deep residual learning and pdes on manifold. arXiv preprint arXiv:1708.05115, 2017.
 Liao & Poggio (2016) Qianli Liao and Tomaso Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. arXiv preprint, arXiv:1604.03640, 2016.
 Liu et al. (2010) Risheng Liu, Zhouchen Lin, Wei Zhang, and Zhixun Su. Learning pdes for image restoration via optimal control. European Conference on Computer Vision, 2010.
 Liu et al. (2013) Risheng Liu, Zhouchen Lin, Wei Zhang, Kewei Tang, and Zhixun Su. Toward designing intelligent pdes for computer vision: An optimal control approach. Image and vision computing, 31(1):43–56, 2013.
 Long et al. (2017) Zichao Long, Yiping Lu, Xianzhong Ma, and Bin Dong. PDE-Net: Learning PDEs from data. arXiv preprint arXiv:1710.09668, 2017.
 Nguyen & Mcmechan (2015) Bao D Nguyen and George A Mcmechan. Five ways to avoid storing source wavefield snapshots in 2d elastic prestack reverse time migration. Geophysics, 80(1):S1–S18, 2015.
 Osher & Paragios (2003) S. Osher and N. Paragios. Geometric level set methods in imaging, vision, and graphics. SpringerVerlag New York Inc, 2003.
 Russakovsky et al. (2014) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, and Michael Bernstein. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2014.
 Shwartz-Ziv & Tishby (2017) Ravid Shwartz-Ziv and Naftali Tishby. Opening the black box of deep neural networks via information. arXiv preprint arXiv:1703.00810, 2017.
 Simonyan & Zisserman (2014) Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for largescale image recognition. arXiv preprint arXiv:1409.1556, 2014.
 Sonoda & Murata (2017) Sho Sonoda and Noboru Murata. Double continuum limit of deep neural networks. ICML Workshop Principled Approaches to Deep Learning, 2017.
 Su & Boyd (2015) Weijie Su and Stephen Boyd. A differential equation for modeling nesterov’s accelerated gradient method: theory and insights. Advances in Neural Information Processing Systems, 2015.
 Szegedy et al. (2015) Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1–9, 2015.

 Szegedy et al. (2017) Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alex Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. AAAI, 2017.
 Veit et al. (2016) Andreas Veit, Michael Wilber, and Serge Belongie. Residual networks are exponential ensembles of relatively shallow networks. Advances in Neural Information Processing Systems, 2016.
 Warming & Hyett (1974) R. F Warming and B. J Hyett. The modified equation to the stability and accuracy analysis of finitedifference methods. Journal of Computational Physics, 14(2):159–179, 1974.
 Wilson et al. (2016) Ashia C. Wilson, Benjamin Recht, and Michael I. Jordan. A lyapunov analysis of momentum methods in optimization. arXiv preprint arXiv:1611.02635, 2016.
 Xie et al. (2017) Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. IEEE Conference on Computer Vision and Pattern Recognition, 2017.
 Yang et al. (2016) Yan Yang, Jian Sun, Huibin Li, and Zongben Xu. Deep ADMMNet for compressive sensing MRI. Advances in Neural Information Processing Systems, 2016.
 Zagoruyko & Komodakis (2016) Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. The British Machine Vision Conference, 2016.
 Zagoruyko & Komodakis (2017) Sergey Zagoruyko and Nikos Komodakis. Diracnets: Training very deep neural networks without skip-connections. arXiv preprint arXiv:1706.00388, 2017.
 Zhang et al. (2017) Xingcheng Zhang, Zhizhong Li, Change Loy, Chen, and Dahua Lin. Polynet: A pursuit of structural diversity in very deep networks. IEEE Conference on Computer Vision and Pattern Recognition, 2017.
Appendix
Appendix A Numerical ODE
In this section, we briefly recall some concepts from numerical ODEs that are used in this paper. The ODE we consider takes the form $\dot u(t) = f(u(t))$. Interested readers may consult Ascher & Petzold (1997) for a comprehensive introduction to the subject.
A.1 Forward and Backward Euler Method
The simplest approximation of $\dot u(t) = f(u(t))$ is to discretize the time derivative $\dot u(t_n)$ by $\frac{u_{n+1} - u_n}{\Delta t}$ and approximate the right hand side by $f(u_n)$. This leads to the forward (explicit) Euler scheme
$u_{n+1} = u_n + \Delta t\, f(u_n).$
If we instead approximate the right hand side of the ODE by $f(u_{n+1})$, we obtain the backward (implicit) Euler scheme
$u_{n+1} = u_n + \Delta t\, f(u_{n+1}).$
The backward Euler scheme has better stability properties than the forward Euler scheme, though we need to solve a (generally nonlinear) equation for $u_{n+1}$ at each step.
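A minimal numerical sketch (the values of a and the step size are illustrative) comparing the two schemes on the linear test equation du/dt = a*u, whose implicit step can be solved in closed form:

```python
import numpy as np

a, dt, T = -2.0, 0.1, 1.0
u_fwd = u_bwd = 1.0
for _ in range(int(T / dt)):
    u_fwd = u_fwd + dt * a * u_fwd        # forward (explicit) Euler
    u_bwd = u_bwd / (1.0 - dt * a)        # backward (implicit) Euler, solved exactly
print(u_fwd, u_bwd, np.exp(a * T))        # both approach exp(a*T) ~ 0.135 as dt -> 0
```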
A.2 Runge-Kutta Method
The Runge-Kutta methods are a family of higher order one-step methods, which can be formulated as
$z_i = u_n + \Delta t \sum_{j=1}^{i-1} a_{ij}\, f(z_j), \quad i = 1, \dots, s, \qquad u_{n+1} = u_n + \Delta t \sum_{i=1}^{s} b_i\, f(z_i).$
Here, $z_i$ is an intermediate approximation to the solution at time $t_n + c_i \Delta t$, and the coefficients $\{a_{ij}\}$, $\{b_i\}$ and $\{c_i\}$ can be adjusted to achieve higher order accuracy. As an example, the popular 2nd-order Runge-Kutta (Heun) scheme takes the form
$\tilde u = u_n + \Delta t\, f(u_n), \qquad u_{n+1} = u_n + \frac{\Delta t}{2}\big(f(u_n) + f(\tilde u)\big).$
A.3 Linear Multistep Method
The linear multistep method generalizes the classical forward Euler scheme to higher orders. The general form of a $k$-step linear multistep method is given by
$\sum_{j=0}^{k} \alpha_j\, u_{n+j} = \Delta t \sum_{j=0}^{k} \beta_j\, f(u_{n+j}),$
where $\alpha_j, \beta_j$ are scalar parameters and $\alpha_k \neq 0$. The linear multistep method is explicit if $\beta_k = 0$, which is what we use to design the linear multistep structure.
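As a concrete instance, the sketch below applies the classical explicit two-step Adams-Bashforth method to a toy problem; the LM-structure of Section 2.2 is another explicit two-step scheme of this family. The test problem and step size are illustrative.

```python
import math

# Two-step Adams-Bashforth: u_{n+1} = u_n + dt * (3/2 f(u_n) - 1/2 f(u_{n-1})),
# applied to du/dt = -u with the exact solution exp(-t).
f = lambda u: -u
dt, u_prev, u_n = 0.1, 1.0, math.exp(-0.1)   # bootstrap the second value from the exact solution
for _ in range(9):                           # march from t = 0.1 to t = 1.0
    u_n, u_prev = u_n + dt * (1.5 * f(u_n) - 0.5 * f(u_prev)), u_n
print(u_n, math.exp(-1.0))                   # ~0.368 in both cases
```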
Appendix B Numerical Schemes For Stochastic Differential Equations
B.1 Itô Process
In this section, we follow the settings of Øksendal (2000) and Evans (2013). We first give the definition of Brownian motion. A Brownian motion $B_t$ is a stochastic process satisfying the following assumptions:

1. $B_0 = 0$ almost surely;
2. $B_t - B_s$ is normally distributed with mean 0 and variance $t - s$ for all $t \geq s \geq 0$;
3. For all time instances $0 \leq t_1 < t_2 < \cdots < t_n$, the random variables $B_{t_1}, B_{t_2} - B_{t_1}, \dots, B_{t_n} - B_{t_{n-1}}$ are independent (also known as independent increments).
The Itô process $X_t$ satisfies $dX_t = f(X_t)\,dt + g(X_t)\,dB_t$, where $B_t$ denotes the standard Brownian motion. We can write $X_t$ as the following integral equation
$X_t = X_0 + \int_0^t f(X_s)\,ds + \int_0^t g(X_s)\,dB_s.$
Here $\int_0^t g(X_s)\,dB_s$ is the Itô integral, which can be defined as the limit of the sums
$\sum_i g(X_{t_i})\,(B_{t_{i+1}} - B_{t_i})$
over partitions $0 = t_0 < t_1 < \cdots < t_m = t$ of $[0, t]$.
B.2 Weak Convergence of Numerical Schemes
Given the Itô process $X_t$ satisfying $dX_t = f(X_t)\,dt + g(X_t)\,dB_t$, where $B_t$ is the standard Brownian motion, we can approximate the equation using the forward Euler scheme (Kloeden & Pearson, 1992)
$X_{n+1} = X_n + \Delta t\, f(X_n) + g(X_n)\,\Delta B_n.$
Here, following the definition of the Itô integral, $\Delta B_n$ is a Gaussian random variable with mean 0 and variance $\Delta t$. It is known that the forward Euler scheme converges strongly to the Itô process. Note from (Kloeden & Pearson, 1992, Chapter 11) that if we replace $\Delta B_n$ by a random variable $\Delta \hat B_n$ from a non-Gaussian distribution, the forward Euler scheme becomes the so-called simplified weak Euler scheme. The simplified weak Euler scheme converges weakly to the Itô process if $\Delta \hat B_n$ satisfies the following moment condition

$\big|\mathbb{E}(\Delta \hat B_n)\big| + \big|\mathbb{E}\big((\Delta \hat B_n)^3\big)\big| + \big|\mathbb{E}\big((\Delta \hat B_n)^2\big) - \Delta t\big| \leq K\, \Delta t^2$  (5)

for some constant $K$. One can verify that a properly scaled random variable drawn from a uniform distribution over an interval centered at 0, or a properly scaled Bernoulli random variable taking values $1$ or $-1$, satisfies condition (5).
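A minimal Monte Carlo sketch (the drift, diffusion and step size are illustrative) comparing the Euler-Maruyama scheme with Gaussian increments against the simplified weak Euler scheme with scaled Bernoulli increments; both produce similar statistics of the terminal state.

```python
import numpy as np

# Simulate dX = a*X dt + b*X dB with (i) Gaussian increments (Euler-Maruyama)
# and (ii) scaled Bernoulli increments +/- sqrt(dt) (simplified weak Euler).
rng = np.random.default_rng(0)
a, b, dt, T, n_paths = -1.0, 0.3, 0.01, 1.0, 20000
steps = int(T / dt)

x_gauss = np.ones(n_paths)
x_bern = np.ones(n_paths)
for _ in range(steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)          # Gaussian increment
    dW = np.sqrt(dt) * rng.choice([-1.0, 1.0], n_paths)  # scaled Bernoulli increment
    x_gauss += a * x_gauss * dt + b * x_gauss * dB
    x_bern += a * x_bern * dt + b * x_bern * dW
print(x_gauss.mean(), x_bern.mean(), np.exp(a * T))      # all close to exp(-1) ~ 0.368
```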