1 Introduction and Related Work
Traditional methods for solving systems of differential equations rely on step-by-step numerical integration. For some problems, such as stiff ODEs, this procedure leads to time-consuming algorithms because of the limits on the time step required to achieve the necessary accuracy of the solution. From this perspective, a neural network, as a universal function approximator, can be used to construct the solution more efficiently. This property has induced wide research and, consequently, a large number of papers devoted to the design of neural networks for solving ODEs and PDEs. Below we review some of these publications in more detail, compare existing approaches, and highlight the advantages of the methodology proposed in this paper.
In [16], a method for solving initial and boundary value problems using feed-forward neural networks is proposed. The solution of the differential equation is written as the sum of two parts: the first satisfies the initial/boundary conditions, and the second corresponds to the output of a neural network. The same technique is applied to the Stokes problem in [7, 10, 27]. In [28], a neural network is trained to satisfy the differential operator, initial condition, and boundary conditions of a partial differential equation (PDE). The authors of [30] translate a PDE into a stochastic control problem and use deep reinforcement learning to approximate the derivative of the solution with respect to the space coordinate.
Other approaches rely on implementing a traditional step-by-step integration method within a neural network framework, cf. [2, 29]. In [29], the author proposes such an architecture: after fitting, the neural network (NN) produces an optimal finite difference scheme for a specific system. Backpropagation through an ordinary differential equation (ODE) solver is proposed in [8], where the authors construct a type of neural network that is analogous to a discretized differential equation. This group of methods requires a traditional numerical method to simulate the dynamics.
Polynomial neural networks are also widely represented in the literature, cf. [32, 31, 24]. In [32], a polynomial architecture that approximates differential equations is proposed. The Legendre polynomial is chosen as a basis in [31]. It should be noted, however, that in all these papers the polynomial architectures are used as black-box models, and the authors do not indicate their connection to the underlying ODEs. Thus, the advantages of the NN can be only partially exploited.
In all the approaches described above, the NN is trained with the initial conditions of the differential equations taken into account. This means that the NN has to be retrained each time the initial conditions change. These techniques are applicable to differential equations of general form, but they provide only a particular solution of the system, which is a strict limitation on their applicability.
Several types of convolutional and recurrent NNs have been developed in the last three years, cf. [9]. These deep networks correspond to associated ODE discretization schemes. There is some benefit in using these special network architectures, but still no clear explanation of it.
In this paper, we consider nonlinear systems of ODEs with polynomial right-hand side,

(1) $\dfrac{dX}{dt} = \sum_{k=0}^{m} P_k(t)\, X^{[k]}$,

where $t$ is an independent variable, $X$ is a state vector, and $X^{[k]}$ denotes the $k$-th Kronecker power of the vector $X$. For example, for $X = (x_1, x_2)$ we have $X^{[2]} = (x_1^2,\, x_1 x_2,\, x_2^2)$ and $X^{[3]} = (x_1^3,\, x_1^2 x_2,\, x_1 x_2^2,\, x_2^3)$ after reduction of equal terms. Such nonlinear systems arise in fields such as automated control, robotics, mechanical and biological systems, chemical reactions, drug development, and molecular dynamics. Moreover, it is often possible to transform a nonlinear equation (either an ODE or a PDE) to polynomial form.
Based on the Taylor mapping technique, it is possible to build a polynomial neural network (PNN) that approximates the general solution of (1). The weights of the PNN are calculated once, directly from the system of ODEs. By a Taylor map we mean a transformation of the form

(2) $X = W_0 + W_1\, X_0 + W_2\, X_0^{[2]} + \dots + W_m\, X_0^{[m]}$,

where $X_0$ is the initial state and the matrices $W_k$ are the weights. The transformation (2) is linear in the weights and nonlinear with respect to $X_0$.
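To make the structure of the map concrete, the transformation (2) can be sketched in a few lines of Python. The weight shapes and the second-order example below are hypothetical; the point is only how each weight matrix acts on the corresponding reduced Kronecker power of the state:

```python
import numpy as np
from itertools import combinations_with_replacement

def kron_power(x, k):
    """Reduced k-th Kronecker power of x: all monomials x_i1*...*x_ik
    with i1 <= ... <= ik (equal terms already merged)."""
    n = len(x)
    return np.array([np.prod(x[list(idx)])
                     for idx in combinations_with_replacement(range(n), k)])

def taylor_map(x0, weights):
    """Apply X = W0 + W1 X0 + W2 X0^[2] + ... (eq. 2); weights[k]
    multiplies the k-th reduced Kronecker power of the initial state."""
    return sum(W @ kron_power(x0, k) for k, W in enumerate(weights))

# Hypothetical 2nd-order map of a 2-dimensional state.
x0 = np.array([1.0, 0.5])
W0 = np.zeros((2, 1))       # constant term; X0^[0] = [1]
W1 = np.eye(2)              # linear term
W2 = np.zeros((2, 3))       # quadratic term acts on (x1^2, x1*x2, x2^2)
out = taylor_map(x0, [W0, W1, W2])   # with these weights the map is identity
```

Note that the map is linear in the entries of the `W` matrices, which is what makes weight estimation from the ODE (or from data) tractable.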
In the literature, this map is referred to variously as a Taylor map or model [19], a tensor decomposition [11], a matrix Lie transform [3], an exponential machine [21], and so on. In fact, the transformation (2) is simply a polynomial regression with respect to the components of $X_0$. Though many numerical solvers for (1) can be considered as maps, they are commonly based on small time steps and on weight matrices that depend on the current state. In this paper, by the mapping approach we mean the transformation (2) with weight matrices $W_k$ that are estimated for a sufficiently large time interval and do not depend on the initial condition $X_0$. The greatest advantage of the mapping approach is computational performance: instead of step-by-step integration with numerical solvers, one can apply a map (2) that estimates the dynamics over a large time interval for different initial conditions at the same accuracy. In this paper, we briefly consider an algorithm for calculating the weight matrices in (2) for an arbitrary time step. Considering the map (2) as a neuron (Fig. 1), it is possible to design a PNN that simulates the dynamics of a given differential equation. Since the PNN is connected to the differential equations, this architecture can also be used for data-driven identification of physical systems.
The rest of the paper is organised as follows. Sec. 2 describes the general approach for building Taylor maps for systems of ODEs. Sec. 3 covers the calculation of PNN weights for simulating the dynamics of applied physical systems. Sec. 4 is devoted to training the PNN; a regularization method is described, along with examples of data-driven dynamics learning that do not rely on the underlying equations.
2 Taylor maps for the system of ODEs
The solution of (1) with an initial condition $X_0$, within its region of convergence, can be represented as a power series during the considered time interval, cf. [4, 3],

(3) $X(t) = \sum_{k=0}^{\infty} W_k(t)\, X_0^{[k]}$.

Theoretical estimates of the accuracy and convergence of the truncated series for solving ODEs can be found in [6]. In [5], it is shown how to calculate the matrices $W_k$ by introducing new auxiliary matrices. The main idea is to replace (1) by the equation

(4)

The last equation does not depend on $X_0$ and is solved only once, with the identity map as the initial condition ($W_1(0) = I$, the identity matrix, and the remaining $W_k(0) = 0$). The truncated solution of (4) for the desired time interval yields the Taylor map (2). One of the advantages of the described approach is simulation performance. Indeed, instead of integrating equation (1) step by step with a small time step every time the initial condition changes, one integrates the equation for the map once, for a unique initial condition and the whole desired time interval. The same map can then be applied to different initial conditions.
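The "integrate once, reuse for all initial conditions" idea is easiest to check in the linear special case of (1), $dX/dt = AX$, where the map over a finite interval reduces to the single weight matrix $W_1 = e^{At}$. The harmonic-oscillator system below is an illustrative stand-in, not an example from the paper:

```python
import numpy as np

# For the linear special case dX/dt = A X of (1), the Taylor map over a
# finite interval t reduces to one weight matrix W1 = exp(A t): it is
# computed ONCE and then propagates any initial condition.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])            # illustrative harmonic oscillator
t = 0.5

def expm_series(M, terms=30):
    """Truncated matrix exponential sum_k M^k / k! (enough terms for small M)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

W1 = expm_series(A * t)                # the map's weight matrix, built once

def rk4(x, h, steps):
    """Reference step-by-step Runge-Kutta integration of the same system."""
    f = lambda y: A @ y
    for _ in range(steps):
        k1 = f(x); k2 = f(x + h / 2 * k1)
        k3 = f(x + h / 2 * k2); k4 = f(x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# One application of W1 replaces 100 RK4 steps, for any initial condition.
x_map = W1 @ np.array([1.0, 0.0])
x_rk = rk4(np.array([1.0, 0.0]), t / 100, 100)
```

For the polynomial systems of Sec. 2, the same role is played by the whole family $W_0, \dots, W_m$ obtained from equation (4).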
3 Simulation of dynamical systems
Since the Taylor mapping approach is commonly used in accelerator physics [14, 25], we demonstrate the proposed method with a simplified example of charged particle motion. We also introduce a deep PNN for the simulation and control of charged particle beam dynamics, and discuss a shallow PNN architecture for the simulation of a stiff ODE.
3.1 Charged particle dynamics
The particle dynamics in electromagnetic fields can be described by a system of ODEs of complex nonlinear form. For simplicity, let us consider an approximation [26] of particle motion in a cylindrical deflector, written in the form of (1),

(5)

where the state variables are the deviation from the equilibrium bending radius and its derivative with respect to the bending angle. As an example, consider a deflector that rotates a reference particle with given initial conditions through a fixed bending angle. For the simulation, we investigate the dynamics of a particle with nontrivial initial conditions that lead to particle oscillations.
The traditional approach to solving system (5) is step-by-step integration with a numerical solver. For this purpose, we use the fourth-order Runge–Kutta method with a fixed step. To control the numerical error in this example, we use an integration step 30 times smaller than the bending angle; larger steps introduce nonphysical dissipation into the particle motion.
Thus, to track a particle through the deflector step by step, one has to make 30 steps. In contrast, the mapping technique allows the output state to be calculated in a single step (see Fig. 2).
Let us find a third-order Taylor map (2) that transforms the initial particle state at the entrance of the deflector into the resulting state at its exit,

(6)

Combining this map with (5), one can write

(7)

Grouping like powers of the state components in the last system, one obtains a system of ODEs that does not depend on the initial condition and describes the dynamics of the weight matrices,

(8)

where the right-hand sides are the functions arising after grouping like terms. By integrating this system over the required interval, we obtain the desired map.
Using this polynomial transformation, one can calculate the state of the particle at the exit of the deflector for arbitrary initial conditions. In this case, the simulation is 160 times faster than the Runge–Kutta-based simulation.
For the investigation of long-term charged particle dynamics, traditional step-by-step numerical methods are not suitable because of their performance limitations. Instead of solving the differential equations directly, one can estimate Taylor maps of high orders of nonlinearity for each control element in an accelerator. By composing such maps sequentially, one obtains a deep polynomial neural network that represents the whole accelerator lattice (see Fig. 3). In such a PNN, each layer is a Taylor map and corresponds to a physical element of the charged particle accelerator.
The described approach was used for the simulation of nonlinear spin-orbit dynamics in the EDM (electric dipole moment) search project [25, 14]. The dynamics of the particles was described by a nine-dimensional state vector. The deep PNN architecture had more than 100 polynomial layers representing the accelerator lattice. The computational performance was increased by a factor of 1500 in comparison with traditional step-by-step integration. Along with computational performance, another advantage of the proposed model is the accounting of uncertainties. Misalignments and field errors in a physical accelerator are easily represented by introducing additional polynomial layers into the PNN. This feature may be essential for accelerator control, since the PNN architecture makes it possible to fit layers to measured data.
3.2 The Rayleigh–Plesset equation
The Rayleigh–Plesset equation governs the dynamics of a spherical gas bubble in an infinite body of incompressible liquid. It is derived from the conservation of mass and momentum, which leads to a highly nonlinear second-order ODE for the bubble radius $R(t)$. The pressure inside the bubble is denoted by $p_B$, the pressure outside, far away from the bubble, by $p_\infty$, and $\rho$ is the density of the surrounding liquid, assumed constant. The equation of motion has the form

(9) $R\ddot{R} + \dfrac{3}{2}\dot{R}^2 = \dfrac{p_B - p_\infty}{\rho}$.

Viscous terms as well as surface tension are neglected. Provided that $p_B(t)$ is known and $p_\infty(t)$ is given, the Rayleigh–Plesset equation can be used to solve for the time-varying bubble radius $R(t)$, whose dynamics includes collapses and rebounds.
Since we study numerical solvers for the Rayleigh–Plesset equation, we assume for simplicity that the pressure difference is constant. The solution contains very strong gradients during the collapse phase and the following rebound phase, i.e. it is a stiff ODE. To guarantee the accuracy of the numerical solution, very small time steps as well as a high-order solver must be used. The Rayleigh–Plesset equation (9) is reformulated as a system of first-order ODEs with polynomial right-hand side,

(10)

with appropriate substitution variables and initial conditions.
Since the equation is a stiff ODE, we use an adaptive mapping approach. This means that we build Taylor maps of seventh order for several different time intervals and then choose which map to apply based on the relative error during the simulation. With this approach, the polynomial neurons can be organized in a shallow architecture that makes it possible to simulate the bubble motion with accuracy control. To compare the computational performance, we ran the simulation for three initial conditions up to the collapse time. The adaptive mapping approach is 2.5 times faster than the MATLAB solver ode45 [20] at similar accuracy (see Fig. 4, 5).
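The adaptive selection between precomputed maps can be sketched with a one-dimensional stand-in, where a 7th-order truncated exponential plays the role of a 7th-order Taylor map and a step-doubling estimate selects the largest admissible interval. All numbers here are illustrative, not the ones used in the paper:

```python
from math import factorial

def t7(z):
    """7th-order truncated Taylor series of exp(z): a one-dimensional
    stand-in for a 7th-order Taylor map of the test problem dx/dt = lam*x."""
    return sum(z**k / factorial(k) for k in range(8))

LAM = -4.0                              # illustrative fast decay rate
steps = [0.2, 0.1, 0.05, 0.025]         # precomputed map intervals, largest first
maps = {s: t7(LAM * s) for s in steps + [s / 2 for s in steps]}

def step_adaptive(x, tol):
    """Advance x by the LARGEST precomputed map whose step-doubling error
    estimate (one big step vs. two half steps) meets the relative tolerance."""
    for h in steps:
        full = maps[h] * x
        half = maps[h / 2] * (maps[h / 2] * x)
        if abs(full - half) <= tol * abs(half):
            return full, h
    return maps[steps[-1]] * x, steps[-1]   # fall back to the smallest map

x_loose, h_loose = step_adaptive(1.0, tol=1e-4)   # loose tolerance: big map
x_tight, h_tight = step_adaptive(1.0, tol=1e-7)   # tight tolerance: smaller map
```

Tightening the tolerance makes the scheme fall back to smaller precomputed intervals, which is exactly the behavior needed near a bubble collapse.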
3.3 The Burgers’ equation
Burgers’ equation is a fundamental partial differential equation that occurs in various areas, such as fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow. This equation is also often used as a benchmark for numerical methods. In [13], a feed-forward neural network is trained to satisfy Burgers’ equation and certain initial conditions, but the computational performance of the approach is not estimated. In this section, we demonstrate how to build a PNN that solves Burgers’ equation and does not require retraining for different initial conditions.
Burgers’ equation has the form

(11) $\dfrac{\partial u}{\partial t} + u\,\dfrac{\partial u}{\partial x} = \nu\,\dfrac{\partial^2 u}{\partial x^2}$.
Following [1] for benchmarking, we use an analytic solution of (11) and a traditional numerical method,

(12)

where the upper index $n$ stands for the time step and the lower index $j$ for the grid node.
Equation (12) defines a finite difference method (FDM) consisting of an explicit Euler scheme for the temporal derivative, a first-order upwind scheme for the nonlinear term, and a centered second-order scheme for the diffusion term. For benchmarking, the time step is fixed, with uniform spacing in space. Thus, for the numerical solution, the method requires a mesh with 1000 steps in the space coordinate and 2000 time steps.
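A sketch of a scheme in the spirit of (12), under the assumption $u \ge 0$ for the upwind direction; the grid sizes and viscosity below are illustrative, not the benchmark values:

```python
import numpy as np

def fdm_step(u, dt, dx, nu):
    """One explicit step: Euler in time, first-order upwind for the nonlinear
    term u*u_x (assuming u >= 0), and a centered second-order scheme for the
    diffusion term nu*u_xx. Boundary values are simply held fixed here."""
    un = u.copy()
    u[1:-1] = (un[1:-1]
               - dt / dx * un[1:-1] * (un[1:-1] - un[:-2])
               + nu * dt / dx**2 * (un[2:] - 2.0 * un[1:-1] + un[:-2]))
    return u

# Illustrative grid and viscosity (NOT the benchmark values from the paper).
x = np.linspace(0.0, 1.0, 101)
u = np.sin(np.pi * x)                 # smooth nonnegative initial profile
for _ in range(100):
    u = fdm_step(u, dt=1e-4, dx=x[1] - x[0], nu=0.01)
```

The explicit scheme is only stable while both $\nu\,\Delta t/\Delta x^2$ and $u\,\Delta t/\Delta x$ stay small, which is the stability constraint mentioned below.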
It is noted in [1] that the FDM introduces a dispersion error into the solution (see Fig. 6). This error can be reduced by increasing the mesh resolution, but then the time step must be decreased to respect the stability constraints of the numerical scheme.
To build a PNN for solving Burgers’ equation, we translate eq. (11) into a system of ODEs and build a Taylor map in accordance with Sec. 2. Approximating the right-hand side of eq. (11) by a function and considering the resulting equation as a hyperbolic one, it is possible to derive the system of ODEs

(13)

where the unknowns are the solution values at discrete points in space. This transformation from PDE to ODE is well known and can be derived using the method of characteristics or the direct method [12]. Using the same discretization as in (12), equation (13) leads to a system of 2000 ODEs, which can easily be expanded into a power series up to the necessary order of nonlinearity.
Since the benchmark FDM scheme contains only first- and second-order approximations, we built a Taylor map of only first order, for a time interval five times larger than the benchmark time step. The numerical solution provided by the resulting PNN is presented in Fig. 6, and the accuracy and performance are compared in Tab. 1.
Method | Time step | Elapsed time | MSE
FDM    |           |              |
PNN    |           |              |
The PNN numerically estimates the dynamics over a larger time interval and provides better accuracy with less computational time than an FDM of the same order of derivative approximation. If the FDM scheme were adjusted to higher accuracy, its computational time would increase even further. Accuracy is calculated as the MSE between the numerical solution and the analytic solution at the final time.
Since the PNN derived from the differential equation approximates the general solution, it can be used to simulate the dynamics for different initial conditions without modifying the weights. For example, for another analytic solution, the same PNN provides a numerical solution whose MSE compares favorably with that of the FDM. The elapsed times are the same, since the mesh sizes were not changed.
4 Learning dynamical systems from data
In this section, we discuss training of the PNN. We demonstrate how one can adjust the weights of the PNN for certain initial conditions of Burgers’ equation by additionally training the PNN derived from the equation. We also introduce a regularization of the PNN based on combinatorial binary optimization and demonstrate the proposed methods in a practical application of data-driven system learning.
4.1 Training PNN for certain initial conditions
The PNN derived from the differential equation works for any initial condition with the same level of accuracy. The accuracy depends on the size of the PNN, namely the order of nonlinearity in the map (2). It is nevertheless possible to train the PNN for a specific initial condition. Such fitting increases the accuracy for that particular solution, but may decrease the accuracy for others.
To train the PNN for Burgers’ equation, we consider a deep architecture with 400 layers and shared weights, each layer defined by the Taylor map (2) for a fixed time step. The loss function is defined as the inconsistency of the numerical solution with the differential equation. The initial values of the weights are calculated from the PDE via the corresponding Taylor map. Initially, only a small fraction of the PNN weights are nonzero. The point of training is to adjust these weights to achieve better accuracy for the given initial conditions. We implemented a simplified training procedure based on coordinate descent, which increases the accuracy by a factor of 3 but at the same time requires a lot of computational time.
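The simplified coordinate-descent fitting can be sketched as follows. The stand-in least-squares problem, step size, and sweep count are hypothetical; the point is only the one-weight-at-a-time update rule:

```python
import numpy as np

def coordinate_descent(w, loss, n_sweeps=500, step=0.02):
    """Sweep the weights one at a time, keeping a +/-step move only when it
    lowers the loss -- a simplified one-coordinate-at-a-time update."""
    w = w.copy()
    for _ in range(n_sweeps):
        for i in range(len(w)):
            for delta in (step, -step):
                trial = w.copy()
                trial[i] += delta
                if loss(trial) < loss(w):
                    w = trial
                    break
    return w

# Stand-in problem: least-squares fit of a small linear map (hypothetical data).
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 4))
w_true = rng.normal(size=4)
y = X @ w_true
mse = lambda w: np.mean((y - X @ w) ** 2)

w_fit = coordinate_descent(np.zeros(4), mse)
```

Each sweep needs many loss evaluations, which illustrates why this simple scheme is accurate but computationally expensive for a network with many weights.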
4.2 Regularization of the PNN
It is common for physical systems that the map is represented by sparse matrices $W_k$, so it is important to identify the nonzero weights of the polynomial neural network. Classical $L_1$ and $L_2$ regularization terms do not work in this case, since they tend to shrink the weights: weights in a PNN can be large in absolute value, but the total number of nonzero weights should be as small as possible. To address this, one can introduce a binary mask $\alpha$ that is applied to the weight matrices,

(14) $X = (\alpha_0 \circ W_0) + (\alpha_1 \circ W_1)\, X_0 + \dots + (\alpha_m \circ W_m)\, X_0^{[m]}$,

where $\circ$ denotes elementwise multiplication.
During the fitting of the weight matrices $W_k$, it is also necessary to find an optimal binary mask. If the weights are fixed after each training epoch, the map (14) becomes linear in the components of the mask. It is then possible to formulate the regularization as a high-order binary optimization problem and translate it into a quadratic unconstrained binary optimization (QUBO) problem that can be run on quantum annealers. For example, a mean-squared-error loss translates directly to QUBO, while the loss function introduced for Burgers’ equation translates to a higher-order unconstrained binary optimization problem; by introducing additional binary variables, it can also be reduced to QUBO.
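A toy version of this mask selection, with the binary optimization done by brute-force enumeration instead of a QUBO solver; the problem sizes and the penalty weight are illustrative:

```python
import numpy as np
from itertools import product

# Stand-in data: a sparse "true" map, and densely fitted noisy weights.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
w_true = np.array([1.5, 0.0, -2.0])          # sparse ground truth
y = X @ w_true
w_fit = w_true + 0.01 * rng.normal(size=3)   # fitted weights, all nonzero

def masked_loss(alpha, penalty=0.05):
    """Data misfit of the masked map plus a penalty on the NUMBER of active
    weights -- the binary objective the mask has to minimize."""
    resid = y - X @ (alpha * w_fit)
    return np.mean(resid**2) + penalty * alpha.sum()

# Brute-force enumeration over binary masks; a QUBO solver replaces this
# step for realistically sized weight matrices.
best_mask = min((np.array(a) for a in product([0, 1], repeat=3)),
                key=masked_loss)
```

The penalty counts active weights rather than shrinking them, which is why this selection recovers the sparsity pattern even when the surviving weights are large in absolute value.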
To demonstrate the advantage of QUBO-based regularization, we implemented a simplified example with the Van der Pol oscillator. This equation is widely used in the physical sciences and engineering; it can describe the pneumatic hammer, the steam engine, the periodic occurrence of epidemics, economic crises and depressions, and the heartbeat. The equation has well-studied dynamics and is widely used for testing numerical methods, e.g. [23].
The Van der Pol oscillator is defined as a system of ODEs that can be presented in the form

(15) $\dot{x} = y, \qquad \dot{y} = \mu\,(1 - x^2)\,y - x$.

We generate the training data as a particular solution of system (15) for a fixed initial condition. After this solution is generated, the equation itself is not used further.
Given this training data set (Fig. 9), the proposed neural network can be fitted with the mean squared error (MSE), based on the $L_2$ norm, as the loss function. We implemented the above technique in Keras/TensorFlow and fitted a third-order Lie transform–based neural network with the Adamax optimizer. After each training epoch, QUBO-based regularization was applied to the weights of the PNN.
Fig. 8 shows that QUBO-based regularization can significantly decrease the number of epochs required to achieve an appropriate level of accuracy. However, the time required to solve the QUBO problem is longer than training without regularization. It should be noted that using special hardware (e.g. quantum or digital annealers) instead of classical QUBO solvers could provide an additional improvement in calculation time.
The generalization ability of the network can be investigated by examining predictions not only on the training data set but also for new initial conditions. Fig. 10 shows predictions started from new points that were not presented to the PNN during fitting. For the prediction starting from the training initial condition, the mean relative error is small; for the new initial conditions, the mean error is even smaller.
4.3 Data-driven identification of a zinc-air energy storage system
Zinc-air batteries are one of the most promising energy storage systems. An effective mathematical model is a key part of a battery management system, being responsible for operation control and internal state evaluation. Model-based analysis is an effective tool in the design and manufacture of power supplies, as well as in the analysis of their behavior under different operating conditions, cf. [17].
There are several approaches to building a dynamic model of a battery. So far, research has mostly focused on deriving mathematical models of zinc-air batteries from the electrochemical principles of their operation. This implies the use of complicated nonlinear PDEs that describe the battery behavior precisely but require time-consuming numerical procedures for their solution.
Another type is the “black box”, or data-driven, model formed by training artificial neural networks on experimental data. Since this approach does not include state equations in the model, it works only on data similar to what the NN was trained on. This requires operational data from the real environment of sufficient volume and quality, which in practice cannot always be guaranteed.
Thus, state-space models with lumped parameters are more suitable for the analysis, control, and optimization of power supplies, since they include general battery characteristics such as voltage, current, and state of charge. In the most general form, without specifying any details, such a model can be presented as

(16)

It should be noted that real power sources exhibit strongly nonlinear effects, which have to be included in the nonlinear parts of the mathematical model (16) from [22]. This implies manual derivation of the equations describing the dynamics and identification of their parameters. In this work, we take an alternative approach: a data-driven construction of the mathematical model from a set of controlled experiments [18]. We investigate whether it is possible to extract relevant features from the current and voltage measurements collected during battery discharge under various fixed loading conditions.
To demonstrate this approach, we consider only the voltage dynamics for several fixed discharge currents and solve the identification problem only for the regions where the voltage decreases in time.
Initial data taken from the open source [18] were filtered with a simple moving-average window and aligned to a constant time stamp of 1 ms. To represent the dynamics of the decreasing voltage, we use a polynomial neural network with three layers (see Fig. 12), each represented by a Taylor map: two of second order and one of fifth order. To train the PNN, the voltage at a given time stamp and the value of the discharge current are used as input data, and the voltage three time stamps later is used as output data. The regularization described above was also used.
After training, the dynamics of the decreasing voltage is calculated as a sequence of predictions performed by the PNN: starting from the initial voltage, the map is applied repeatedly to roll the dynamics forward. In this way, the PNN plays the role of a model of the dynamical system that can calculate the dynamics for different initial conditions.
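Schematically, the recurrent prediction loop looks as follows. The one-step map, its weights, and the voltage/current values are purely illustrative stand-ins for the trained PNN:

```python
import numpy as np

def pnn_step(v, current, W):
    """One-step map: a stand-in 2nd-order polynomial in (v, I)."""
    z = np.array([1.0, v, current, v * v, v * current, current * current])
    return float(W @ z)

# Purely illustrative weights giving a slow, current-dependent voltage decay.
W = np.array([0.0, 0.999, -1e-3, 0.0, 0.0, 0.0])

def rollout(v0, current, W, n_steps):
    """v_{k+1} = PNN(v_k, I): repeated application of the one-step map."""
    vs = [v0]
    for _ in range(n_steps):
        vs.append(pnn_step(vs[-1], current, W))
    return np.array(vs)

v_traj = rollout(1.4, 0.5, W, n_steps=100)   # simulated discharge trajectory
```

Because only the initial voltage enters the loop, the same trained map produces trajectories for different initial conditions without retraining.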
Fig. 11 shows the results of the voltage dynamics simulation with the trained PNN. While additional features such as derivatives of the voltage or the power could potentially increase the accuracy, the built PNN preserves the physical properties of the dynamics. For example, the PNN successfully predicts the lower voltage level, which is the same for all currents, in contrast to simple function approximation, which can give aberrant negative voltage values, cf. [22].
One of the advantages of polynomial architectures over other machine learning methods is their consistency with the theory of dynamical systems and differential equations. A PNN trained on data can provide a mathematical model that preserves physical properties. For example, in [15] it is shown that MLP and LSTM neural networks simply memorize data, even for simplified physical systems; they cannot predict the dynamics for data that were not presented during fitting. The PNN architecture allows the dynamics to be extrapolated to new initial conditions while preserving the physical properties of the dynamical system.
5 Supplementary code
The program code that contains the considered examples can be found in the following GitHub repositories.
github.com/andiva/DeepLieNet: implementation of PNN in Keras/TensorFlow (sections 1, 3.3, 3.2, and 4.3):
/demo/Deflector: cylindrical deflector simulation
/demo/accelerator.ipynb: storage ring simulation
/demo/Ray_Ples: simulation of the Rayleigh–Plesset equation
/demo/Battery: trained PNN for voltage dynamics
6 Conclusion
Since Taylor maps and polynomial neural networks have a strong connection to systems of differential equations, these methods can be used for physical system simulation and data-driven identification. If the differential equations of the system are known, it is possible to calculate the weights of the PNN to the necessary level of accuracy. The built PNN can then be used to simulate the dynamics instead of traditional numerical solvers. It was shown that Taylor mapping and the PNN provide the same or even better accuracy with less computational time in comparison with step-by-step integration of the equations.
If the equations of the system are not known, the PNN can be trained from scratch with the available data. One defines an architecture of the PNN and fits its weights directly to the data, instead of fitting a parametrized equation.
In this article, we considered the simulation of dynamical systems arising in practical applications with the PNN. For training the PNN, we introduced a QUBO-based regularization and demonstrated data-driven dynamics learning both on a simplified Van der Pol oscillator and on the applied problem of battery development.
We did not consider in detail questions of accuracy, PNN architecture selection, or optimization methods for training; this should be done in further research. With the provided examples, we have rather demonstrated the applicability of Taylor mapping methods, commonly used in the investigation of dynamical systems, to the theory of neural networks and data-driven system learning.
We would like to thank Prof. Dr. Sergei Andrianov for his help in explaining Taylor mapping techniques for solving systems of differential equations.
References
 [1] ‘Airbus quantum computing challenge’, https://www.airbus.com/innovation/tech-challenges-and-competitions/airbus-quantum-computing-challenge.html, (2019).
 [2] A. Anastassi, ‘Constructing Runge–Kutta methods with the use of artificial neural networks’, Tech. rep. https://arxiv.org/pdf/1106.1194.pdf, (2013).
 [3] S. Andrianov, ‘A matrix representation of the lie transformation’, Proceedings of the Abstracts of the International Congress on Computer Systems and Applied Mathematics, 14, (1993).
 [4] S. Andrianov, ‘Symbolic computation of approximate symmetries for ordinary differential equations’, Mathematics and Computers in Simulation, 57(35), 147–154, (2001).
 [5] S. Andrianov, ‘A role of symbolic computations in beam physics’, Computer Algebra in Sc, Comp., Lecture Notes in Computer Science, 6244, 19–30, (2010).
 [6] S. Andrianov, ‘The convergence and accuracy of the matrix formalism approximation’, Proceedings of ICAP2012, Rostock, Germany, 93–95, (2012).
 [7] M. Baymani, A. Kerayechian, and S. Effati, ‘Artificial neural networks approach for solving stokes problem’, Applied Mathematics, 1, 288–292, (2010).
 [8] R. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud, ‘Neural ordinary differential equations’, Tech. rep. https://arxiv.org/pdf/1806.07366.pdf, (2019).
 [9] Xinshi Chen, ‘Ordinary differential equations for deep learning’, arXiv preprint arXiv:1911.00502, (2019).
 [10] M. Chiaramonte and M. Kiener, ‘Solving differential equations using neural networks’, cs229.stanford.edu/proj2013/ChiaramonteKienerSolvingDifferentialEquationsUsingNeuralNetworks.pdf, (2013).
 [11] S. Dolgov, ‘A tensor decomposition algorithm for large odes with conservation laws’, Tech. report. https://arxiv.org/pdf/1403.8085.pdf, (2014).
 [12] L.C. Evans, ‘Partial differential equations’, Providence, R.I.: American Mathematical Society, (2010).
 [13] M. Hayati and B. Karami, ‘Feedforward neural network for solving partial differential equations’, Journal of Applied Sciences, 7(19), 2812–2817, (2007).
 [14] A. Ivanov, S. Andrianov, and Y. Senichev, ‘Simulation of spinorbit dynamics in storage rings’, Journal of Physics: Conference Series, 747(1), (2016).
 [15] A. Ivanov, A. Sholokhova, S. Andrianov, and R. Konoplev-Esgenburg, ‘Lie transform–based neural networks for dynamics simulation and learning’, Tech. rep. https://arxiv.org/abs/1802.01353, (2019).
 [16] I.E. Lagaris, A. Likas, and D.I. Fotiadis, ‘Artificial neural networks for solving ordinary and partial differential equations’, Tech. rep. https://arxiv.org/pdf/physics/9705023.pdf, (1997).
 [17] W. Lao-atiman, K. Bumroongsil, A. Arpornwichanop, P. Bumroongsakulsawat, S. Olaru, and S. Kheawhom, ‘Model-based analysis of an integrated zinc-air flow battery/zinc electrolyzer system’, Frontiers in Energy Research, 7(FEB), (2019).
 [18] W. Lao-atiman, S. Olaru, A. Arpornwichanop, and S. Kheawhom, ‘Discharge performance and dynamic behavior of refuellable zinc-air battery’, Scientific Data, 6(1), 168, (2019).
 [19] M. Berz, ‘From Taylor series to Taylor models’, https://bt.pa.msu.edu/pub/papers/taylorsb/taylorsb.pdf, (1997).
 [20] MATLAB. version 7.12.0 (r2012a), 2018.
 [21] A. Novikov, M. Trofimov, and I. Oseledets, ‘Exponential machines’, Tech. report. https://arxiv.org/abs/1605.03795, (2017).
 [22] S. Olaru, A. Golovkina, W. Laoatiman, and S. Kheawhom, ‘A mathematical model for dynamic operation of zincair battery cells’, in Proc. of 7th IFAC Symposium on Systems Structure and Control (SSSC), Sinaia, Romania, (September 2019).
 [23] S. Pan and K. Duraisamy, ‘Longtime predictive modeling of nonlinear dynamical systems using neural networks’, Hindawi Complexity, 4801012, (2018).
 [24] V. Schetinin, ‘Polynomial neural networks learnt to classify EEG signals’, Tech. rep. https://arxiv.org/ftp/cs/papers/0504/0504058.pdf, (1997).
 [25] Y. Senichev, A. Ivanov, A. Lehrach, R. Maier, D. Zyuzin, and S. Andrianov, ‘Spin tune parametric resonance investigation’, Proceedings of the Particle Accelerator Conference, 3020–3022, (2014).
 [26] Y. Senichev and S. Møller, ‘Beam dynamics in electrostatic rings’, (2000).
 [27] A. Sharma, ‘NeuralNetDiffEq.jl: A neural network solver for ODEs’, https://julialang.org/blog/2017/10/gsocNeuralNetDiffEq, (2017).
 [28] J. Sirignano and K. Spiliopoulos, ‘DGM: A deep learning algorithm for solving partial differential equations’, Journal of Computational Physics, (2018).
 [29] Y. Wang and C. Lin, ‘Runge–Kutta neural network for identification of dynamical systems in high accuracy’, IEEE Transactions on Neural Networks, 9(2), 294–307, (1998).
 [30] E. Weinan, J. Han, and A. Jentzen, ‘Deep learningbased numerical methods for highdimensional parabolic partial differential equations and backward stochastic differential equations’, Tech. rep. https://arxiv.org/pdf/1706.04702.pdf, (2017).
 [31] Y. Yang, M. Hou, and J. Luo, ‘A novel improved extreme learning machine algorithm in solving ordinary differential equations by legendre neural network methods’, Advances in Difference Equations, 4(1), 469, (2018).
 [32] L. Zjavka, ‘Differential polynomial neural network’, Journal of Artificial Intelligence, 4(1), 89–99, (2011).