1 Introduction
Recently there has been growing interest in discovering governing equations numerically from observational data. Earlier efforts include methods based on symbolic regression ([5, 43]), equation-free modeling ([24]), the heterogeneous multiscale method (HMM) ([15]), artificial neural networks ([19]), nonlinear regression ([50]), empirical dynamic modeling ([46, 53]), nonlinear Laplacian spectral analysis ([18]), automated inference of dynamics ([44, 12, 13]), etc. More recent efforts cast the problem as a function approximation problem, where the unknown governing equations are treated as target functions relating the data for the state variables to their time derivatives. Most of these methods employ sparsity-promoting algorithms to create parsimonious models from a large dictionary of candidate models, so that the true dynamics can be recovered exactly ([47]). Many studies have addressed practical issues such as noise in the data ([7, 40]), corruption in the data ([48]), partial differential equations ([38, 41]), etc. Methods have also been developed in conjunction with model selection ([28]), Koopman theory ([6]), and Gaussian process regression ([35]), to name a few. A more recent work resorts to the more traditional means of approximation by using orthogonal polynomials ([52]). That approach seeks an accurate numerical approximation of the underlying governing equations, instead of their exact recovery. By doing so, many existing results in polynomial approximation theory can be applied, particularly those on sampling strategies. It was shown in [52] that data from a large number of short bursts of trajectories are more effective for equation recovery than data from a single long trajectory.

On the other hand, artificial neural networks (ANN), and particularly deep neural networks (DNN), have seen tremendous success in many different disciplines. The number of publications is too large to survey; here we cite only a few relatively recent review/summary publications [30, 4, 16, 32, 14, 20, 42]. Efforts have been devoted to the use of ANNs in various aspects of scientific computing, including construction of reduced-order models ([22]), aiding the solution of conservation laws ([37]), multiscale problems ([8, 51]), solving and learning systems involving ODEs and PDEs ([29, 11, 27, 25]), uncertainty quantification ([49, 54]), etc.
The focus of this paper is on the approximation/learning of dynamical systems using deep neural networks (DNN). The topic has been explored in a series of recent articles, in the context of ODEs ([36, 39]) and PDEs ([34, 33, 27]). The new contributions of this paper include the following. First, we introduce new constructions of DNNs specifically suited for learning dynamical systems. In particular, our new network structures employ the residual network (ResNet), which was first proposed in [21] for image analysis and has become very popular due to its effectiveness. In our construction, we employ a ResNet block, which consists of multiple fully connected hidden layers, as the fundamental building block of our DNN structures. We show that the ResNet block can be considered a one-step numerical integrator in time. This integrator is “exact” in time, i.e., it introduces no temporal error, in the sense that the only error stems from the neural network approximation of the evolution operators defining the governing equation. This is different from a few existing works where ResNet is viewed as the Euler forward scheme ([9]). Secondly, we introduce two variations of the ResNet structure to serve as multistep methods for learning the underlying governing equations. The first one employs recurrent use of the ResNet block. This is inspired by the well-known recurrent neural network (RNN), whose connection with dynamical systems has long been recognized, cf. [20]. Our recurrent network, termed RTResNet hereafter, is different in the sense that the recurrence is enforced blockwise on the ResNet block, which is itself a DNN. (Note that in the traditional RNN, the recurrence is enforced on the hidden layers.) We show that the RTResNet is a multistep integrator that is exact in time, with the only error stemming from the ResNet approximation of the evolution operator of the underlying equation.
The other variation of the ResNet approximator employs recursive use of the ResNet block, and is termed RSResNet. Again, the recursion is enforced blockwise on the ResNet block (which is a DNN). We show that the RSResNet is also an exact multistep integrator. The difference between RTResNet and RSResNet is that the former is equivalent to a multistep integrator with a uniform time step, whereas the latter is an “adaptive” method with variable time steps depending on the particular problem and data. Thirdly, the derivations in this paper utilize the integral form of the underlying dynamical system. By doing so, the proposed methods do not require knowledge or data of the time derivatives of the equation states. This is different from most of the existing studies (cf. [5, 7, 40, 52]), which deal with the equations directly and thus require time derivative data. Acquiring time derivatives introduces an additional source of noise and error, particularly when one has to conduct numerical differentiation of noisy trajectory data. Consequently, the three proposed DNN structures, the one-step ResNet and the multistep RTResNet and RSResNet, are capable of approximating unknown dynamical systems using only state variable data, which may be relatively coarsely distributed in time. In this case, most existing methods become less effective, as accurate extraction of time derivatives is difficult.
This paper is organized as follows. After the basic problem setup in Section 2, we present the main methods in Section 3 and some theoretical properties in Section 4. We then present, in Section 5, a set of numerical examples, covering both linear and nonlinear differential equations, to demonstrate the effectiveness of the proposed algorithms.
2 Setup
Let us consider an autonomous system,

(1) $\frac{d\mathbf{x}}{dt} = f(\mathbf{x}),$

where $\mathbf{x} \in \mathbb{R}^n$ are the state variables. Let $\Phi_t : \mathbb{R}^n \to \mathbb{R}^n$ denote the flow map. The solution can be written as

(2) $\mathbf{x}(t) = \Phi_{t - t_0}\left(\mathbf{x}(t_0)\right).$

Note that for autonomous systems the time variable can be arbitrarily shifted and only the time difference, or time lag, $t - t_0$ is relevant. Hereafter we will omit $t_0$ in the exposition, unless confusion arises.
In this paper, we assume the form of the governing equations is unknown. Our goal is to create an accurate model for the governing equations using data of the solution trajectories. In particular, we assume the data are collected in the form of pairs, each of which corresponds to the solution states along one trajectory at two different time instances. That is, we consider the set

(3) $\left\{ \left(\mathbf{x}_j^{(1)}, \mathbf{x}_j^{(2)}\right),\ j = 1, \dots, J \right\},$

where $J$ is the total number of data pairs, and for each pair $j$,

(4) $\mathbf{x}_j^{(1)} = \mathbf{x}(t_j) + \boldsymbol{\epsilon}_j^{(1)}, \qquad \mathbf{x}_j^{(2)} = \mathbf{x}(t_j + \Delta_j) + \boldsymbol{\epsilon}_j^{(2)}.$

Here the terms $\boldsymbol{\epsilon}_j^{(1)}$ and $\boldsymbol{\epsilon}_j^{(2)}$ stand for the potential noise in the data, and $\Delta_j$ is the time lag between the two states. For notational convenience, we assume $\Delta_j \equiv \Delta$ to be a constant for all $j$ throughout this paper. Consequently, the data set becomes input-output measurements of the $\Delta$-lag flow map,

(5) $\mathbf{x}_j^{(2)} \approx \Phi_\Delta\left(\mathbf{x}_j^{(1)}\right), \qquad j = 1, \dots, J.$
3 Deep Neural Network Approximation
The core building block of our methods is a standard fully connected feedforward neural network (FNN) with $M \ge 3$ layers, of which $M - 2$ are hidden layers. It has been established that fully connected FNNs can approximate a large class of input-output maps arbitrarily well, i.e., they are universal approximators, cf. [31, 2, 23]. Since the right-hand side of (1) is our approximation target, we will consider maps from $\mathbb{R}^n$ to $\mathbb{R}^n$. Let $n_\ell$, $\ell = 1, \dots, M$, be the number of neurons in each layer; we then have $n_1 = n_M = n$. Let $\mathbf{N} : \mathbb{R}^n \to \mathbb{R}^n$ be the operator of this network. For any input $y_{\rm in} \in \mathbb{R}^n$, the output of the network is

(6) $y_{\rm out} = \mathbf{N}(y_{\rm in}; \Theta),$

where $\Theta$ is the parameter set including all the parameters in the network. The operator $\mathbf{N}$ is a composition of the following operators,

(7) $\mathbf{N}(\cdot\,; \Theta) = (\sigma_{M-1} \circ \mathbf{W}_{M-1}) \circ \cdots \circ (\sigma_1 \circ \mathbf{W}_1),$

where $\mathbf{W}_\ell$ is the matrix of weight parameters connecting the neurons from the $\ell$-th layer to the $(\ell+1)$-th layer, after using the standard approach of augmenting the biases into the weights, and the activation function $\sigma_\ell$ is applied component-wise to the $\ell$-th layer. There exist many choices for the activation functions, e.g., sigmoid functions, ReLU (rectified linear unit), etc. In this paper we use a sigmoid activation function in all layers, except at the output layer. This is one of the common choices for DNNs.

Using the data set (3), we can directly train (6) to approximate the $\Delta$-lag flow map (5). This can be done by applying (6) with $y_{{\rm in},j} = \mathbf{x}_j^{(1)}$ to obtain $y_{{\rm out},j}$ for each $j = 1, \dots, J$, and then minimizing the following mean squared loss function,

(8) $L(\Theta) = \frac{1}{J} \sum_{j=1}^{J} \left\| y_{{\rm out},j} - \mathbf{x}_j^{(2)} \right\|^2,$
where $\|\cdot\|$ denotes the vector 2-norm hereafter. With a slight abuse of notation, hereafter we will write $\mathbf{N}(\mathbf{x})$ to stand for $\mathbf{N}(\mathbf{x}; \Theta)$ for all sample data, unless confusion arises otherwise.

3.1 One-step ResNet Approximation
We now present the idea of using the residual neural network (ResNet) as a one-step approximation method. The idea of ResNet is to explicitly introduce the identity operator in the network and force the network to effectively approximate the “residue” of the input-output map. Although mathematically equivalent, this simple transformation has proved highly advantageous in practice and has become increasingly popular since its formal introduction in [21].
The structure of the ResNet is illustrated in Figure 1. The ResNet block consists of multiple fully connected hidden layers plus an identity operator that reintroduces the input into the output of the hidden layers. This effectively produces the following mapping,

(9) $y_{\rm out} = y_{\rm in} + \mathbf{N}(y_{\rm in}; \Theta),$

where $\Theta$ denotes the weight and bias parameters in the network. The parameters are determined by minimizing the same loss function (8). This effectively accomplishes the training of the operator $\mathbf{N}$.
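To make the construction concrete, the ResNet-block mapping (9) and the mean squared loss (8) can be sketched in a few lines of NumPy. The layer sizes, the tanh activation, and the function names below are illustrative choices of ours, not the paper's implementation:

```python
import numpy as np

def fnn(x, weights, biases):
    """Fully connected feedforward net: tanh hidden layers, linear output."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)
    return h @ weights[-1] + biases[-1]

def resnet_block(x, weights, biases):
    """One ResNet block, eq. (9): identity plus the FNN 'residue'."""
    return x + fnn(x, weights, biases)

def mse_loss(weights, biases, x_in, x_out):
    """Mean squared loss, eq. (8), over the data pairs."""
    pred = resnet_block(x_in, weights, biases)
    return np.mean(np.sum((pred - x_out) ** 2, axis=1))
```

Note that if the output layer of the FNN is zeroed out, the block reduces exactly to the identity map; this built-in bias toward the identity is what makes the ResNet structure well suited to near-identity flow maps.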
The connection between dynamical systems and ResNet has been recognized before; in fact, ResNet has been viewed as the Euler forward time integrator ([9]). To further examine its properties, let us consider the exact $\Delta$-lag flow map,

(10) $\mathbf{x}(t_0 + \Delta) = \mathbf{x}(t_0) + \int_{t_0}^{t_0 + \Delta} f(\mathbf{x}(s))\, ds = \mathbf{x}(t_0) + \Delta \cdot f(\mathbf{x}(\tau)), \qquad \tau \in [t_0, t_0 + \Delta].$

This is a trivial derivation using the mean value theorem. For notational convenience, we now define the “effective increment”. For a given autonomous system (1), an initial state $\mathbf{x}(t_0) = \mathbf{x}_0$, and an increment $\Delta \ge 0$, the effective increment of size $\Delta$ is defined as

(11) $\boldsymbol{\phi}_\Delta(\mathbf{x}_0) = \Delta \cdot f(\mathbf{x}(\tau)),$

for some $\tau \in [t_0, t_0 + \Delta]$ such that

(12) $\mathbf{x}(t_0 + \Delta) = \mathbf{x}_0 + \boldsymbol{\phi}_\Delta(\mathbf{x}_0).$

Note that the effective increment depends only on its initial state $\mathbf{x}_0$, once the governing equation $f$ and the increment $\Delta$ are fixed.

Upon comparing the exact state (12) and the one-step ResNet method (9), it is thus easy to see that a successfully trained network operator $\mathbf{N}$ is an approximation to the effective increment $\boldsymbol{\phi}_\Delta$, i.e.,

(13) $\mathbf{N}(\mathbf{x}; \Theta) \approx \boldsymbol{\phi}_\Delta(\mathbf{x}).$
Since the effective increment completely determines the true solution state over each $\Delta$ interval, we can then use the trained ResNet operator to approximate the solution trajectory. That is, starting with a given initial state $\mathbf{x}_0$, we can time march the state,

(14) $\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{N}(\mathbf{x}_k; \Theta), \qquad k = 0, 1, 2, \dots$

This discrete dynamical system serves as our approximation to the true dynamical system (1). It gives us an approximation to the true states on a uniform time grid with step size $\Delta$.
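To illustrate why the scheme (14) carries no temporal discretization error, consider the scalar equation dx/dt = -x (an illustrative choice of ours), whose effective increment over a step of size Delta is known in closed form, (e^{-Delta} - 1) x. If a trained network matched this increment exactly, marching (14) would reproduce the true solution at every grid point:

```python
import numpy as np

def march(increment, x0, n_steps):
    """Iterate the discrete system (14): x_{k+1} = x_k + N(x_k)."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        traj.append(traj[-1] + increment(traj[-1]))
    return np.array(traj)

delta = 0.1
# Exact effective increment of dx/dt = -x over a step of size delta.
increment = lambda x: (np.exp(-delta) - 1.0) * x

traj = march(increment, x0=1.0, n_steps=50)  # matches exp(-k*delta) at every step
```

Any actual network only approximates the increment, so in practice the error in (14) is governed entirely by the training error in (13), not by the step size.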
Remark 3.1
Even though the approximate system (14) resembles the well-known Euler forward time stepping scheme, it is not a first-order method in time. In fact, upon comparing (14) with the true state (12), it is easy to see that (14) is “exact” in terms of temporal integration. The only source of error in the system (14) is the approximation error of the effective increment in (13), whose size is determined by the quality of the data and of the network training algorithm.
Remark 3.2
The derivation here is based on (12), which comes from the integral form of the governing equation. As a result, training the ResNet method does not require data on the time derivatives of the true states. Moreover, $\Delta$ does not need to be exceedingly small (to enable accurate numerical differentiation in time). This makes the ResNet method suitable for problems with relatively coarsely distributed data.
3.2 Multistep Recurrent ResNet (RTResNet) Approximation
We now combine the idea of recurrent neural network (RNN) and the ResNet method from the previous section. The distinct feature of our construction is that the recurrence is applied to the entire ResNet block, rather than to the individual hidden layers, as is done for the standard RNNs. (For an overview of RNN, interested readers are referred to [20], Ch. 10.)
The structure of the resulting recurrent ResNet (RTResNet) is shown in Figure 2. The ResNet block, as presented in Figure 1, is “repeated” $(K-1)$ times, for an integer $K \ge 1$, before producing the output $y_{\rm out}$; the ResNet block thus occurs a total of $K$ times. The unfolded structure is shown on the right of Figure 2. The RTResNet then produces the following scheme: for $k = 1, \dots, K$,

(15) $\mathbf{z}_0 = y_{\rm in}, \qquad \mathbf{z}_k = \mathbf{z}_{k-1} + \mathbf{N}(\mathbf{z}_{k-1}; \Theta), \qquad y_{\rm out} = \mathbf{z}_K.$

The network is then trained by using the data set (3) and minimizing the same loss function (8). For $K = 1$, this reduces to the one-step ResNet method (9).
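A minimal sketch of the recurrent scheme (15): one shared block applied K times. As a sanity check we again use dx/dt = -x (our illustrative choice), whose effective increment over a sub-step delta = Delta/K is (e^{-delta} - 1) x, so composing the ideal block K times reproduces the exact Delta-lag map:

```python
import numpy as np

def rt_resnet(block, x, K):
    """RTResNet scheme (15): the same block (shared parameters) applied K times."""
    for _ in range(K):
        x = block(x)
    return x

Delta, K = 0.5, 5
delta = Delta / K
block = lambda x: x + (np.exp(-delta) - 1.0) * x  # ideal sub-step block

y = rt_resnet(block, 2.0, K)  # equals 2 * exp(-Delta)
```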
To examine the properties of the RTResNet, let us consider a uniform discretization of the time lag $\Delta$. That is, let $\delta = \Delta / K$, and consider $t_k = t_0 + k\delta$, $k = 0, \dots, K$. The exact solution state satisfies the following relation,

(16) $\mathbf{x}(t_k) = \mathbf{x}(t_{k-1}) + \boldsymbol{\phi}_\delta(\mathbf{x}(t_{k-1})), \qquad k = 1, \dots, K,$

where $\boldsymbol{\phi}_\delta$ is the effective increment of size $\delta$, as defined in (11)-(12).

Upon comparing this with the RTResNet scheme (15), it is easy to see that training the RTResNet is equivalent to finding the operator $\mathbf{N}$ that approximates the effective increment,

(17) $\mathbf{N}(\cdot\,; \Theta) \approx \boldsymbol{\phi}_\delta(\cdot).$
Similar to the onestep ResNet method, the multistep RTResNet is also exact in time, as it contains no temporal discretization error. The only error stems from the approximation of the effective increment.
Once the RTResNet is successfully trained, it gives us a discrete dynamical system (15) that can be further marched in time from any initial state. This is an approximation to the true dynamical system at uniformly distributed time instances with an interval $\delta = \Delta / K$. Therefore, even though the training data are given over the time lag $\Delta$, the RTResNet system can produce solution states on a finer time grid with step size $\delta \le \Delta$.

3.3 Multistep Recursive ResNet Approximation
We now present another multistep approximation method based on the ResNet block in Figure 1. The structure of the network is shown in Figure 3. From the input $y_{\rm in}$, ResNet blocks are applied recursively a total of $K$ times before producing the output $y_{\rm out}$. The network, referred to as recursive ResNet (RSResNet) hereafter, thus produces the following scheme: for $k = 1, \dots, K$,

(18) $\mathbf{z}_0 = y_{\rm in}, \qquad \mathbf{z}_k = \mathbf{z}_{k-1} + \mathbf{N}_k(\mathbf{z}_{k-1}; \Theta_k), \qquad y_{\rm out} = \mathbf{z}_K.$

Compared to the recurrent RTResNet method (15) from the previous section, the major difference in the RSResNet is that each ResNet block inside the network has its own parameter set $\Theta_k$, and the blocks are thus different from each other. Since each ResNet block is a DNN by itself, the RSResNet can be a very deep network when $K$ is large. When $K = 1$, it also reduces back to the one-step ResNet network.
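The recursive scheme (18) can be sketched in the same fashion; the only structural change is that each of the K blocks carries its own parameters, and hence its own (possibly non-uniform) effective increment. The equation dx/dt = -x and the hand-picked sub-steps below are illustrative choices of ours:

```python
import numpy as np

def rs_resnet(blocks, x):
    """RSResNet scheme (18): K distinct blocks composed in sequence."""
    for block in blocks:  # each block has its own parameter set
        x = block(x)
    return x

Delta = 0.5
deltas = [0.05, 0.15, 0.3]  # non-uniform sub-steps summing to Delta
# Ideal blocks: each realizes the effective increment of its own sub-step.
blocks = [lambda x, d=d: x + (np.exp(-d) - 1.0) * x for d in deltas]

y = rs_resnet(blocks, 2.0)  # equals 2 * exp(-Delta)
```

The `d=d` default argument freezes each sub-step inside its closure; composing the three sub-step maps recovers the full Delta-lag map exactly.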
Let $t_0 < t_1 < \cdots < t_K = t_0 + \Delta$ be arbitrarily distributed time instances in $[t_0, t_0 + \Delta]$, and let $\delta_k = t_k - t_{k-1}$, $k = 1, \dots, K$, be the (non-uniform) increments. It is then straightforward to see that the exact state satisfies

(19) $\mathbf{x}(t_k) = \mathbf{x}(t_{k-1}) + \boldsymbol{\phi}_{\delta_k}(\mathbf{x}(t_{k-1})), \qquad k = 1, \dots, K,$

where $\boldsymbol{\phi}_{\delta_k}$ is the effective increment defined in (11)-(12).

Upon comparing with the RSResNet scheme (18), one can see that the training of the RSResNet produces the following approximation,

(20) $\mathbf{N}_k(\cdot\,; \Theta_k) \approx \boldsymbol{\phi}_{\delta_k}(\cdot), \qquad k = 1, \dots, K.$

That is, each ResNet operator $\mathbf{N}_k$ is an approximation of an effective increment of size $\delta_k$, under the condition $\delta_1 + \cdots + \delta_K = \Delta$. Training the network using the data (3) and the loss function (8) determines the parameter sets $\Theta_k$, and subsequently the effective increments of size $\delta_k$, $k = 1, \dots, K$. From this perspective, one may view the RSResNet as an “adaptive” method, as it adjusts its parameter sets to approximate smaller effective increments whose sizes are determined by the data. Since the RSResNet is a very deep network with a large number of parameters, it is, in principle, capable of producing more accurate results than the ResNet and the RTResNet, provided caution has been exercised to prevent overfitting.
A successfully trained RSResNet also gives us a discrete dynamical system that approximates the true governing equation (1). Due to its “adaptive” nature, the intermediate time intervals $\delta_k$ are variable and not known explicitly. Therefore, the discrete RSResNet system needs to be applied as a whole to produce the solution states over the time interval $\Delta$, which is the same interval given by the training data. This is different from the RTResNet, which can produce solutions over the smaller and uniform time interval $\delta = \Delta / K$.
4 Theoretical Properties
In this section we present a few straightforward analyses to demonstrate certain theoretical aspects of the proposed DNNs for equation approximation.
4.1 Continuity of Flow Map
Under certain conditions on $f$, one can show that the flow map of the dynamical system (1) is locally Lipschitz continuous.

Assume $f$ is Lipschitz continuous with Lipschitz constant $L$ on a set $D \subseteq \mathbb{R}^n$. For any $T > 0$, let $D_T$ denote the set of initial states whose trajectories remain in $D$ up to time $T$. Then, for any $0 \le t \le T$, the flow map $\Phi_t$ is Lipschitz continuous on $D_T$. Specifically, for any $\mathbf{y}, \mathbf{z} \in D_T$,

(21) $\left\|\Phi_t(\mathbf{y}) - \Phi_t(\mathbf{z})\right\| \le e^{Lt} \|\mathbf{y} - \mathbf{z}\|.$
4.2 Compositions of Flow Maps
It was shown in [3] that any smooth bi-Lipschitz function can be represented as a composition of functions, each of which is near-identity in Lipschitz semi-norm. For the flow map of the autonomous system (1), we can prove a stronger result by using the following semi-group property,

(22) $\Phi_{s+t} = \Phi_s \circ \Phi_t, \qquad s, t \ge 0.$

For any positive integer $n$, the flow map $\Phi_\Delta$ can be expressed as an $n$-fold composition of $\Phi_{\Delta/n}$, namely,

(23) $\Phi_\Delta = \underbrace{\Phi_{\Delta/n} \circ \cdots \circ \Phi_{\Delta/n}}_{n\ \text{times}},$

where $\Phi_{\Delta/n}$ satisfies

(24) $\mathrm{Lip}\left(\Phi_{\Delta/n} - \mathbf{I}\right) \le e^{L\Delta/n} - 1 \le \frac{1}{n}\left(e^{L\Delta} - 1\right).$

Suppose in addition that $f$ is bounded on $D$; then

(25) $\left\|\Phi_{\Delta/n} - \mathbf{I}\right\|_{L^\infty} \le \frac{C\Delta}{n},$

where $\mathbf{I}$ is the identity map, and $C = \sup_{\mathbf{x} \in D} \|f(\mathbf{x})\|$.
The representation (23) is a direct consequence of the property (22). For any $\mathbf{y}, \mathbf{z}$, with $t = \Delta/n$ we have

$\left\|(\Phi_t - \mathbf{I})(\mathbf{y}) - (\Phi_t - \mathbf{I})(\mathbf{z})\right\| = \left\| \int_0^t \left( f(\Phi_s(\mathbf{y})) - f(\Phi_s(\mathbf{z})) \right) ds \right\| \le \int_0^t L e^{Ls}\, ds\, \|\mathbf{y} - \mathbf{z}\| = \left(e^{Lt} - 1\right) \|\mathbf{y} - \mathbf{z}\|,$

where the Lipschitz bound (21) has been used. Since $e^x$ is a convex function, it satisfies Jensen's inequality,

$e^{L\Delta/n} = e^{\frac{1}{n} L\Delta + \left(1 - \frac{1}{n}\right) \cdot 0} \le \frac{1}{n} e^{L\Delta} + \left(1 - \frac{1}{n}\right).$

Thus we obtain

$e^{L\Delta/n} - 1 \le \frac{1}{n}\left(e^{L\Delta} - 1\right),$

which implies (24). For any $\mathbf{y}$, we have $\Phi_t(\mathbf{y}) - \mathbf{y} = \int_0^t f(\Phi_s(\mathbf{y}))\, ds$ for $t = \Delta/n$. Hence

$\left\|\Phi_{\Delta/n}(\mathbf{y}) - \mathbf{y}\right\| \le \frac{\Delta}{n} \sup_{\mathbf{x} \in D} \|f(\mathbf{x})\| = \frac{C\Delta}{n}.$

This yields (25), and the proof is complete.
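The composition property (23) can be checked directly on a linear system, where the flow map is known in closed form. For the rotation system below (an illustrative choice of ours: dx/dt = A x with A skew-symmetric), the Delta-lag flow map is a rotation, composing n copies of the Delta/n-lag map recovers it exactly, and the near-identity bound (25) can be checked as well:

```python
import numpy as np

def flow(t, omega=1.3):
    """Exact flow map of dx/dt = A x, A = [[0, -w], [w, 0]]: rotation by w*t."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.array([[c, -s], [s, c]])

Delta, n = 0.5, 8
# n-fold composition of the Delta/n-lag map, property (23).
composed = np.linalg.matrix_power(flow(Delta / n), n)
```

On the unit disk, sup‖f‖ = omega for this system, so (25) predicts ‖Phi_{Delta/n} − I‖ ≤ omega·Delta/n, which the rotation matrix satisfies since its deviation from the identity has norm 2|sin(omega·Delta/2n)|.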
This estimate can serve as a theoretical justification of the ResNet method (9) and the RTResNet method (15). As long as $\Delta$ is reasonably small, the flow map of the underlying dynamical system is close to the identity. Therefore, it is natural to use ResNet, which explicitly introduces the identity operator, to approximate the “residue” of the flow map. The norm of the DNN operator $\mathbf{N}$, which approximates the residual flow map $\Phi_\Delta - \mathbf{I}$, is then small, at $O(\Delta)$. For the RTResNet with $K > 1$, its norm becomes even smaller, at $O(\Delta/K)$. We remark that it was pointed out empirically in [9] that using multiple ResNet blocks can result in networks with smaller norms.

4.3 Error Bound
Let $\mathbf{M}$ denote the neural network approximation operator of the $\Delta$-lag flow map $\Phi_\Delta$. For the proposed ResNet (9), RTResNet (15), and RSResNet (18), the operator can be written as

(26) $\mathbf{M} = (\mathbf{I} + \mathbf{N}_K) \circ \cdots \circ (\mathbf{I} + \mathbf{N}_1),$

where $K = 1$ for the one-step ResNet and the blocks share the same parameters for the RTResNet.
We now derive a general error bound for the solution approximation using the DNN operator $\mathbf{M}$. This bound serves as a general guideline for the error growth. More specific error bounds for each network structure are more involved and will be pursued in a future work.
Let $\mathbf{x}_k$ denote the solution of the approximate model at time $t_k = k\Delta$, and let $e_k = \|\mathbf{x}(t_k) - \mathbf{x}_k\|$ denote the error.

Assume that the same assumptions as in Lemma 4.1 hold, and let us further assume that (i) the approximate model starts from the exact initial state, $e_0 = 0$, and (ii) $\|\mathbf{M}(\mathbf{y}) - \Phi_\Delta(\mathbf{y})\| \le E$ for all $\mathbf{y}$ in the domain of interest; then we have

(27) $e_k \le \frac{e^{Lk\Delta} - 1}{e^{L\Delta} - 1}\, E, \qquad k = 1, 2, \dots$
The triangle inequality implies that

$e_k = \|\mathbf{x}(t_k) - \mathbf{x}_k\| \le \left\|\Phi_\Delta(\mathbf{x}(t_{k-1})) - \Phi_\Delta(\mathbf{x}_{k-1})\right\| + \left\|\Phi_\Delta(\mathbf{x}_{k-1}) - \mathbf{M}(\mathbf{x}_{k-1})\right\| \le e^{L\Delta} e_{k-1} + E,$

where the Lipschitz continuity of the flow map, shown in (21), has been used in the last inequality. Recursively applying the above estimate gives

$e_k \le E \sum_{i=0}^{k-1} e^{Li\Delta} = \frac{e^{Lk\Delta} - 1}{e^{L\Delta} - 1}\, E.$

The proof is complete.
5 Numerical Examples
In this section we present numerical examples to verify the properties of the proposed methods. In all the examples, we generate the training data pairs in the following way:

1. Generate $J$ points $\{\mathbf{x}_j^{(1)}\}$ from the uniform distribution over a computational domain $D$. The domain $D$ is a region in which we are interested in the solution behavior; it is typically chosen to be a hypercube prior to the computation.

2. For each $j$, starting from $\mathbf{x}_j^{(1)}$, we march the underlying governing equation forward for a time lag $\Delta$, using a highly accurate standard ODE solver, to generate $\mathbf{x}_j^{(2)}$. In our examples $\Delta$ is set to a fixed value, the same for all pairs.
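The two-step data generation above can be sketched as follows. A hand-rolled classical RK4 integrator stands in for the "highly accurate standard ODE solver"; the function names, domain, and lag value are illustrative assumptions of ours:

```python
import numpy as np

def rk4_step(f, x, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def generate_pairs(f, J, lo, hi, Delta, substeps=100, seed=0):
    """Sample J states uniformly over the hypercube [lo, hi] and march each
    forward by the time lag Delta to form the data pairs (3)-(4)."""
    rng = np.random.default_rng(seed)
    x_in = rng.uniform(lo, hi, size=(J, len(lo)))
    dt = Delta / substeps
    x_out = x_in.copy()
    for _ in range(substeps):
        x_out = rk4_step(f, x_out, dt)  # accurate reference integration
    return x_in, x_out
```

For a vectorized right-hand side `f`, all J short trajectories are marched simultaneously, which is convenient for the dense sampling of short bursts advocated in [52].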
We remark that the time lag $\Delta$ is relatively coarse and prevents accurate estimation of time derivatives via numerical differentiation. Since our proposed methods employ the integral form of the underlying equation, this difficulty is circumvented. The random sampling of solution trajectories of length $\Delta$ follows the work of [52], where it was established that this kind of dense sampling of short trajectories is highly effective for equation recovery.
All of our network models, ResNet, RTResNet, and RSResNet, are trained via the loss function (8) using the open-source TensorFlow library [1]. The training data set is divided into mini-batches of fixed size, and we typically train each model for a fixed number of epochs, reshuffling the training data in each epoch. All weights are initialized randomly from Gaussian distributions, and all biases are initialized to zero.

After training the network models satisfactorily using the data of time lag $\Delta$, we march the trained network models further forward in time and compare the results against reference states, which are produced by high-order numerical solvers of the true underlying governing equations. We march the trained network systems well beyond the training lag to examine their (relatively) long-term behaviors; the final prediction time is set separately for the two linear examples and the two nonlinear examples.
5.1 Linear ODEs
We first study two linear ODE systems, as textbook examples. In both examples, our one-step ResNet method has 3 hidden layers, each of which has 30 neurons. The multistep RTResNet and RSResNet methods both consist of several ResNet blocks, each of which contains 3 hidden layers with 20 neurons in each layer.
Example 1
We first consider the following two-dimensional linear ODE:

(28)

The computational domain $D$ is taken to be a square.
Upon training the three network models satisfactorily using the $\Delta$-lag data pairs, we march the trained models further in time. The results are shown in Figure 4. We observe that all three network models produce accurate predictions over the entire prediction horizon.
As discussed in Section 3.2, the multistep RTResNet method is able to produce an approximation over a smaller time step $\delta = \Delta/K$. The trained RTResNet model thus allows us to produce predictions over the finer time step $\delta$. In Figure 5, we show the time marching of the trained RTResNet model using this smaller time step. The results again agree very well with the reference solution. This demonstrates a distinct capability of the RTResNet: it can produce accurate predictions at a resolution higher than that of the given data. On the other hand, our numerical tests also reveal that training the RTResNet with $K > 1$ becomes more involved: more training data are typically required, and convergence can be slower, compared to training the one-step ResNet method. Similar behavior is also observed in the multistep RSResNet method with $K > 1$. The development of efficient training procedures for the multistep RTResNet and RSResNet methods is necessary and will be pursued in a future work.
Example 2
We now consider another linear ODE system:
(29) 
The numerical results for the three trained network models are presented in Figure 6. Again, we show the prediction results of the trained models over a long time horizon. While all predictions agree well with the reference solution, one can visually see that the RSResNet model is more accurate than the RTResNet model, which in turn is more accurate than the one-step ResNet model. This is expected, as the multistep methods should be more accurate than the one-step ResNet, and the RSResNet should be even more accurate due to its adaptive nature.
5.2 Nonlinear ODEs
We now consider two nonlinear problems. The first one is the well-known damped pendulum problem, and the second one is a nonlinear differential-algebraic equation (DAE) system for modeling a genetic toggle switch ([17]). In both examples, our one-step ResNet model has 2 hidden layers, each of which has 40 neurons. Our multistep RTResNet and RSResNet models both have 3 of the same ResNet blocks ($K = 3$). Again, our training data are collected over the time lag $\Delta$. We produce predictions with the trained models over a long time horizon and compare the results against the reference solutions.
Example 3: Damped pendulum
The first nonlinear example we consider is the damped pendulum problem, written as a first-order system with fixed damping and restoring-force coefficients. The computational domain $D$ is a square region. In Figure 7, we present the prediction results of the three network models, starting from a fixed initial condition. We observe excellent agreement between the network models and the reference solution.
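For reference, the damped pendulum in first-order form reads dx1/dt = x2, dx2/dt = -alpha*x2 - beta*sin(x1); the parameter values below are illustrative placeholders, not necessarily those used in the paper. Along a trajectory, the energy E = x2^2/2 + beta*(1 - cos x1) decays at the rate dE/dt = -alpha*x2^2 <= 0, which the sketch verifies:

```python
import numpy as np

def pendulum_rhs(x, alpha=0.3, beta=9.8):
    """Damped pendulum as a first-order system (alpha, beta illustrative)."""
    x1, x2 = x
    return np.array([x2, -alpha * x2 - beta * np.sin(x1)])

def energy_rate(x, alpha=0.3, beta=9.8):
    """dE/dt along the flow, with E = x2^2/2 + beta*(1 - cos x1)."""
    grad_E = np.array([beta * np.sin(x[0]), x[1]])
    return float(grad_E @ pendulum_rhs(x, alpha, beta))
```

The monotone energy decay is a useful qualitative check when comparing long-time network predictions against a reference trajectory.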
Example 4: Genetic toggle switch
We now consider a system of nonlinear differential-algebraic equations (DAE), which models a genetic toggle switch in Escherichia coli ([17]). It is composed of two repressors and two constitutive promoters, where each promoter is inhibited by the repressor that is transcribed by the opposing promoter. Details of the experimental measurements can be found in [10]. The system of equations is as follows:

In this system, the two state components denote the concentrations of the two repressors. The parameters $\alpha_1$ and $\alpha_2$ are the effective rates of synthesis of the repressors; $\beta$ and $\gamma$ represent the cooperativity of repression of the two promoters, respectively; [IPTG] is the concentration of IPTG, the chemical compound that induces the switch; and $K$ is the dissociation constant of IPTG.
In the following numerical experiment, we take the parameter values used in [17], and we restrict the computation to a bounded domain $D$.

In Figure 8 we present the prediction results generated by the ResNet, the RTResNet, and the RSResNet models, starting from a fixed initial condition. Again, all three models produce accurate approximations, even for such a long-time simulation.
6 Conclusion
We presented several deep neural network (DNN) structures for approximating unknown dynamical systems using trajectory data. The DNN structures are based on the residual network (ResNet), which serves as a one-step exact time integrator. Two multistep variations were presented: the recurrent ResNet (RTResNet) and the recursive ResNet (RSResNet). Upon successful training, the methods produce discrete dynamical systems that approximate the underlying unknown governing equations. All methods are based on the integral form of the underlying system. Consequently, their construction does not require time derivatives of the trajectory data, and the methods can work with coarsely distributed data as well. We presented the construction details of the methods and their theoretical justification, and used several examples to demonstrate their effectiveness.
References

[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.

[2] Andrew R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930–945, 1993.

[3] Peter L. Bartlett, Steven N. Evans, and Philip M. Long. Representing smooth functions as compositions of near-identity functions with implications for deep network optimization. arXiv preprint arXiv:1804.05012, 2018.

[4] Monica Bianchini and Franco Scarselli. On the complexity of neural network classifiers: A comparison between shallow and deep architectures. IEEE Transactions on Neural Networks and Learning Systems, 25(8):1553–1565, 2014.

[5] J. Bongard and H. Lipson. Automated reverse engineering of nonlinear dynamical systems. Proc. Natl. Acad. Sci. U.S.A., 104(24):9943–9948, 2007.
 [6] S. L. Brunton, B. W. Brunton, J. L. Proctor, Eurika Kaiser, and J. N. Kutz. Chaos as an intermittently forced linear system. Nature Communications, 8, 2017.
 [7] S. L. Brunton, J. L. Proctor, and J. N. Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci. U.S.A., 113(15):3932–3937, 2016.
 [8] S. Chan and A.H. Elsheikh. A machine learning approach for efficient uncertainty quantification using multiscale methods. J. Comput. Phys., 354:494–511, 2018.
 [9] B. Chang, L. Meng, E. Haber, F. Tung, and D. Begert. Multilevel residual networks from dynamical systems view. In International Conference on Learning Representations, 2018.
 [10] R. Chartrand. Numerical differentiation of noisy, nonsmooth data. ISRN Applied Mathematics, 2011, 2011.
 [11] Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. arXiv preprint arXiv:1806.07366, 2018.
 [12] B. C. Daniels and I. Nemenman. Automated adaptive inference of phenomenological dynamical models. Nature Communications, 6, 2015.
[13] B. C. Daniels and I. Nemenman. Efficient inference of parsimonious phenomenological models of cellular dynamics using S-systems and alternating regression. PloS One, 10(3):e0119821, 2015.
[14] Ke-Lin Du and M.N.S. Swamy. Neural Networks and Statistical Learning. Springer-Verlag, 2014.
 [15] Weinan E, Bjorn Engquist, and Zhongyi Huang. Heterogeneous multiscale method: A general methodology for multiscale modeling. Phys. Rev. B, 67:092101, 2003.
 [16] Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In Conference on Learning Theory, pages 907–940, 2016.
[17] T. S. Gardner, C. R. Cantor, and J. J. Collins. Construction of a genetic toggle switch in Escherichia coli. Nature, 403(6767):339, 2000.
[18] D. Giannakis and A. J. Majda. Nonlinear Laplacian spectral analysis for time series with intermittency and low-frequency variability. Proc. Natl. Acad. Sci. U.S.A., 109(7):2222–2227, 2012.
 [19] R. GonzalezGarcia, R. RicoMartinez, and I. G. Kevrekidis. Identification of distributed parameter systems: A neural net based approach. Comput. Chem. Eng., 22:S965–S968, 1998.
 [20] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT Press, 2016.

[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.

[22] J.S. Hesthaven and S. Ubbiali. Non-intrusive reduced order modeling of nonlinear problems using neural networks. J. Comput. Phys., 363:55–78, 2018.
 [23] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251–257, 1991.
[24] I. G. Kevrekidis, C. W. Gear, J. M. Hyman, P. G. Kevrekidis, O. Runborg, C. Theodoropoulos, et al. Equation-free, coarse-grained multiscale computation: Enabling microscopic simulators to perform system-level analysis. Commun. Math. Sci., 1(4):715–762, 2003.
[25] Yuehaw Khoo, Jianfeng Lu, and Lexing Ying. Solving parametric PDE problems with artificial neural networks. arXiv preprint arXiv:1707.03351, 2018.
 [26] Moshe Leshno, Vladimir Ya Lin, Allan Pinkus, and Shimon Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks, 6(6):861–867, 1993.
[27] Zichao Long, Yiping Lu, Xianzhong Ma, and Bin Dong. PDE-Net: Learning PDEs from data. arXiv preprint arXiv:1710.09668, 2017.
 [28] N. M. Mangan, J. N. Kutz, S. L. Brunton, and J. L. Proctor. Model selection for dynamical systems via sparse regression and information criteria. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 473(2204), 2017.
 [29] A. Mardt, L. Pasquali, H. Wu, and F. Noe. VAMPnets for deep learning of molecular kinetics. Nature Comm., 9:5, 2018.
 [30] Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. In Advances in neural information processing systems, pages 2924–2932, 2014.
 [31] Allan Pinkus. Approximation theory of the MLP model in neural networks. Acta Numerica, 8:143–195, 1999.

[32] Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, and Qianli Liao. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing, 14(5):503–519, 2017.

[33] Maziar Raissi. Deep hidden physics models: Deep learning of nonlinear partial differential equations. arXiv preprint arXiv:1801.06637, 2018.
 [34] Maziar Raissi and George Em Karniadakis. Hidden physics models: Machine learning of nonlinear partial differential equations. J. Comput. Phys., 357:125–141, 2018.
[35] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Machine learning of linear differential equations using Gaussian processes. Journal of Computational Physics, 348:683–693, 2017.
 [36] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Multistep neural networks for datadriven discovery of nonlinear dynamical systems. arXiv preprint arXiv:1801.01236, 2018.
[37] D. Ray and J.S. Hesthaven. An artificial neural network as a troubled-cell indicator. J. Comput. Phys., 367:166–191, 2018.
 [38] S. H. Rudy, S. L. Brunton, J. L. Proctor, and J. N. Kutz. Datadriven discovery of partial differential equations. Science Advances, 3(4):e1602614, 2017.
[39] Samuel H. Rudy, J. Nathan Kutz, and Steven L. Brunton. Deep learning of dynamics and signal-noise decomposition with time-stepping constraints. arXiv preprint arXiv:1808.02578, 2018.
 [40] H. Schaeffer and S. G. McCalla. Sparse model selection via integral terms. Phys. Rev. E, 96(2):023302, 2017.
 [41] Hayden Schaeffer. Learning partial differential equations via data discovery and sparse optimization. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 473(2197), 2017.
 [42] Juergen Schmidhuber. Deep learning in neural networks: an overview. Neural Networks, 61:85–117, 2015.
 [43] M. Schmidt and H. Lipson. Distilling freeform natural laws from experimental data. Science, 324(5923):81–85, 2009.
 [44] M. D. Schmidt, R. R. Vallabhajosyula, J. W. Jenkins, J. E. Hood, A. S. Soni, J. P. Wikswo, and H. Lipson. Automated refinement and inference of analytical models for metabolic networks. Physical Biology, 8(5):055011, 2011.
 [45] Andrew Stuart and Anthony R Humphries. Dynamical Systems and Numerical Analysis, volume 2. Cambridge University Press, 1998.
 [46] G. Sugihara, R. May, H. Ye, C. Hsieh, E. Deyle, M. Fogarty, and S. Munch. Detecting causality in complex ecosystems. Science, 338(6106):496–500, 2012.
 [47] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
 [48] G. Tran and R. Ward. Exact recovery of chaotic systems from highly corrupted data. Multiscale Model. Simul., 15(3):1108–1129, 2017.
 [49] R.K. Tripathy and I. Bilionis. Deep UQ: learning deep neural network surrogate model for high dimensional uncertainty quantification. J. Comput. Phys., 375:565–588, 2018.
 [50] H. U. Voss, P. Kolodner, M. Abel, and J. Kurths. Amplitude equations from spatiotemporal binaryfluid convection data. Phys. Rev. Lett., 83(17):3422, 1999.
 [51] Yating Wang, Siu Wun Cheung, Eric T. Chung, Yalchin Efendiev, and Min Wang. Deep multiscale model learning. arXiv preprint arXiv:1806.04830, 2018.
 [52] Kailiang Wu and Dongbin Xiu. Numerical aspects for approximating governing equations using data. arXiv preprint arXiv:1809.09170, 2018.
 [53] H. Ye, R. J. Beamish, S. M. Glaser, S. C. H. Grant, C. Hsieh, L. J. Richards, J. T. Schnute, and G. Sugihara. Equationfree mechanistic ecosystem forecasting using empirical dynamic modeling. Proc. Natl. Acad. Sci. U.S.A., 112(13):E1569–E1576, 2015.
 [54] Y. Zhu and N. Zabaras. Bayesian deep convolutional encoderdecoder networks for surrogate modeling and uncertainty quantification. J. Comput. Phys., 366:415–447, 2018.