    # An Effective and Efficient Method to Solve the High-Order and the Non-Linear Ordinary Differential Equations: the Ratio Net

An effective and efficient method for solving high-order and non-linear ordinary differential equations is provided. The method is based on the ratio net. By comparing the method with existing methods, such as the polynomial based method and the multilayer perceptron network based method, we show that the ratio net gives good results with higher efficiency.


## 1 Introduction

Many problems encountered in engineering, physics, mathematics, and other fields can be described by differential equations. For example, differential equations are used in the simulation of oscillation phenomena, in the social sciences, and in the treatment of tumors. In many cases, problems are expressed in the form of high-order and non-linear ordinary differential equations. For example, a high-order differential equation filter can effectively remove the noise in a signal. However, solving ordinary differential equations (ODEs) analytically, especially high-order and non-linear ODEs, is difficult. For example, analytical methods such as the series method [5, 6] and the constant-transform method sometimes fail in the high-order and non-linear cases.

Among the many numerical methods, the neural network based methods are prominent and give good results for various kinds of ODEs. For example, neural network based methods have been used to solve first-order linear and non-linear ODEs. S. Chakraverty and S. Mall provide a regression-based weight generation algorithm and a Legendre neural network to solve high-order ODEs. YunLei Yang et al. prove the effectiveness of using neural networks to solve ODEs.

In order to construct an effective neural network structure, several methods have been provided. These methods can be divided into two categories: (1) the neural network based methods, with trial functions based on the multilayer perceptron network (MLP) [12, 13, 14, 15], the radial basis function network (RBF), etc.; (2) the polynomial based methods, with trial functions based on the Legendre polynomials [10, 11], the Chebyshev polynomials [17, 18, 19, 20], etc.

Beyond the construction of the trial function, the boundary conditions of an ODE bring further difficulties. Usually, researchers treat the boundary conditions as an additional constraint by transforming them into an extra term in the loss function. In that approach, the parameters of the model are trained not only to fit the shape of the target function but also to meet the boundary conditions; in some cases, the boundary-condition term prevents the neural network from finding the target function. Another approach introduces a trial function constructed so that the boundary conditions are satisfied automatically: one part of the trial function is designed to satisfy the boundary conditions, while a network part is designed to search for the target function.

There are two key issues in solving ODEs with neural network based methods, namely effectiveness and efficiency. (1) Effectiveness concerns how to construct a neural network structure that is able to find the target function, i.e., the solution of the ODE. (2) Efficiency concerns how to accelerate the training process. For the first issue, researchers have provided various kinds of neural network based methods and have proved their effectiveness. Beyond effectiveness, efficiency needs to be considered too; that is, methods that solve ODEs efficiently are preferred.

In this paper, an effective and efficient method for solving high-order and non-linear ODEs is provided. The method is based on the ratio net, which was proposed in previous works [22, 23]. We solve illustrative examples from Refs. [11, 24, 25, 26, 27]. By comparing the method with existing methods, such as the polynomial based method and the MLP based method, we show that the ratio net is both efficient and effective.

This paper is organized as follows: in Sec. 2, we describe the method. In Sec. 3, we show the advantages of the method by applying it to various illustrative examples. In Sec. 4, conclusions are given.

## 2 Method description

In this section, we introduce the main method used in the present paper, including the structure of the ratio net, the trial function under boundary conditions, the loss function, and the updating algorithm for the weights.

### 2.1 The ratio net: a brief review

The method in the present paper is based on the ratio net. The ratio net has been shown to be more efficient than conventional neural networks such as the MLP and the RBF. In this section, we give a brief review of the ratio net.

A neural network is good at searching for the mode of the relation between the outputs and the inputs. In this search, the nonlinearity of the relation is the main difficulty. Instead of a nonlinear activation function or a nonlinear kernel function, the ratio net uses fractional forms to deal with the nonlinearity, see Fig. (1). In the ratio net, the relation between the outputs and the inputs is

$$y^{\text{ratio}}_{i}(x)=\frac{\left(\sum_{j=1}^{n}w'_{ij}x_{j}+b'_{i}\right)\left(\sum_{j=1}^{n}w''_{ij}x_{j}+b''_{i}\right)\cdots}{\left(\sum_{j=1}^{n}w'''_{ij}x_{j}+b'''_{i}\right)\left(\sum_{j=1}^{n}w''''_{ij}x_{j}+b''''_{i}\right)\cdots}, \tag{2.1}$$

where $w'_{ij}$, $b'_{i}$, etc. are the parameters or weights, $j$ runs from $1$ to $n$, the dimension of the input $x$, and $i$ runs from $1$ to the dimension of the output $y$. Here, in order to solve ODEs, the dimensions of the input and the output are both $1$.
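As a concrete illustration, the one-dimensional forward pass of Eq. (2.1) can be sketched in a few lines of NumPy. The function and parameter names below are ours, not the paper's, and the number of affine factors in the numerator and the denominator is a free design choice.

```python
import numpy as np

def ratio_net(x, num_w, num_b, den_w, den_b):
    """One-dimensional ratio net, a sketch of Eq. (2.1).

    The output is a product of affine terms divided by another product
    of affine terms; num_w/num_b hold the numerator weights and biases,
    den_w/den_b the denominator ones.
    """
    numerator = np.prod([w * x + b for w, b in zip(num_w, num_b)], axis=0)
    denominator = np.prod([w * x + b for w, b in zip(den_w, den_b)], axis=0)
    return numerator / denominator

x = np.linspace(0.0, 1.0, 5)
y = ratio_net(x, num_w=[1.0, 0.5], num_b=[0.1, 1.0], den_w=[0.2], den_b=[1.0])
```

One practical caveat of the fractional form is that the denominator should be kept away from zero on the interval of interest, e.g. through the initialization of the denominator biases.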

### 2.2 The trial function under the boundary condition

In solving an ODE, the boundary conditions bring another difficulty. We find that it is important to construct a network, serving as the trial function, that meets the constraint of the boundary conditions automatically. In this section, based on the ratio net, we construct a trial function that automatically meets the constraint of the boundary conditions.

Here, we consider ODEs on the interval $[a,b]$ with boundary conditions $y^{(m)}(a)=A_{m}$ and $y^{(n)}(b)=B_{n}$, where $y^{(m)}(a)$ and $y^{(n)}(b)$ are the $m$-th and the $n$-th derivatives at $x=a$ and $x=b$ respectively, and $A_{m}$ and $B_{n}$ are real numbers representing the boundary values. For example, a set of boundary conditions is $y(a)=A_{0}$, $y'(a)=A_{1}$, $y(b)=B_{0}$, and $y'(b)=B_{1}$. The trial function in the present paper is of the form:

$$y^{\text{trial}}_{\text{ratio}}(x)=y_{\text{ratio}}(x)\,(x-a)^{M_{a}}(b-x)^{M_{b}}+g(x), \tag{2.2}$$

where the exponents $M_{a}$ and $M_{b}$ are set by the maximum order of derivative appearing in the boundary conditions at $x=a$ and $x=b$ respectively (one more than that maximum order, so that the network term and its constrained derivatives vanish at the boundaries). $y_{\text{ratio}}$ is the ratio net given in Eq. (2.1) with the dimensions of the input and the output both $1$. $g(x)$ is a polynomial whose number of coefficients matches the number of equations in the boundary conditions; $g(x)$ meets the boundary conditions and is thus determined by them. For example, if the boundary conditions read $y(a)=A$ and $y(b)=B$, then $g(x)$ is the straight line with $g(a)=A$ and $g(b)=B$.

Figure 2: An example of the trial function. By adjusting the weights, the trial function is capable of searching for the target function while meeting the boundary conditions.
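For the simplest Dirichlet case $y(a)=A$, $y(b)=B$ (so $M_a=M_b=1$ and $g$ is a straight line), the construction of Eq. (2.2) can be sketched as follows; any approximator can play the role of the network factor, and the names here are ours.

```python
import numpy as np

def trial_function(x, net, a, b, A, B):
    """Trial function of Eq. (2.2) for the Dirichlet conditions y(a)=A, y(b)=B."""
    g = A + (B - A) * (x - a) / (b - a)    # straight line: g(a)=A, g(b)=B
    return net(x) * (x - a) * (b - x) + g  # boundary factor vanishes at a and b

# Whatever the network returns, the boundary conditions hold exactly:
net = lambda x: np.sin(3.0 * x) + 2.0      # arbitrary stand-in for the network
a, b, A, B = 0.0, 1.0, -1.0, 2.0
assert np.isclose(trial_function(a, net, a, b, A, B), A)
assert np.isclose(trial_function(b, net, a, b, A, B), B)
```

Training only moves the network factor, so the boundary conditions never have to compete with the ODE residual in the loss.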

### 2.3 Other existing methods: a brief review

In order to show the efficiency of the new method in solving ODEs, we compare the proposed method with other existing methods as well. Without loss of generality, we consider two typical conventional methods that are used to solve ODEs: the Legendre polynomial based method and the MLP based method. In this section, we give a brief review of these methods.

The polynomial based methods: the Legendre polynomials. The Legendre polynomials are a typical class of orthogonal polynomials. A linear combination of Legendre polynomials up to order $L$ can be used as an effective function approximant. The Legendre polynomials read:

$$P_{0}(x)=1, \tag{2.3}$$
$$P_{1}(x)=x, \tag{2.4}$$

and

$$P_{n+1}(x)=\frac{2n+1}{n+1}\,x\,P_{n}(x)-\frac{n}{n+1}\,P_{n-1}(x), \tag{2.5}$$

where $P_{n}(x)$ is the Legendre polynomial of order $n$. The trial function based on the Legendre polynomials is

$$y^{\text{trial}}_{\text{Legendre}}(x)=y_{\text{Legendre}}(x)\,(x-a)^{M_{a}}(b-x)^{M_{b}}+g(x), \tag{2.6}$$

where

$$y_{\text{Legendre}}(x)=\sum_{j=0}^{L}\omega_{j}P_{j}(x) \tag{2.7}$$

with the weights $\omega_{j}$ to be decided.
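The combination of Eq. (2.7) can be evaluated directly with the three-term recurrence of Eq. (2.5); a minimal sketch (the naming is ours):

```python
import numpy as np

def legendre_combination(x, weights):
    """Evaluate y_Legendre(x) = sum_j weights[j] * P_j(x) of Eq. (2.7), using
    the recurrence P_{n+1} = ((2n+1) x P_n - n P_{n-1}) / (n+1) of Eq. (2.5)."""
    x = np.asarray(x, dtype=float)
    p_prev, p_curr = np.ones_like(x), x            # P_0 and P_1
    total = weights[0] * p_prev
    if len(weights) > 1:
        total = total + weights[1] * p_curr
    for n in range(1, len(weights) - 1):
        # advance the recurrence and accumulate the weighted term
        p_prev, p_curr = p_curr, ((2 * n + 1) * x * p_curr - n * p_prev) / (n + 1)
        total = total + weights[n + 1] * p_curr
    return total
```

In the Legendre based method, the weights $\omega_j$ play the same role as the network weights: they are the trainable parameters of the trial function.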

The neural network based methods: the MLP. The MLP is a classical neural network that consists of an input layer, several hidden layers, and an output layer. To obtain the next layer, a nonlinear activation function is applied to a linear combination of the previous layer. For example, a one-hidden-layer MLP with the dimensions of the input and the output both $1$ is given as

$$y_{\text{MLP}}(x)=\sum_{j=1}^{n}\omega^{(2)}_{j}\,\sigma\!\left(\omega^{(1)}_{j}x+b^{(1)}_{j}\right)+b^{(2)}, \tag{2.8}$$

where $n$ is the size of the hidden layer and $\sigma$ is the activation function. Common choices of the activation function are the sigmoid function, $\tanh(x)$, and the rectified linear unit, which is $x$ for $x>0$ and $0$ for $x\le 0$. The trial function of the neural network based method reads:

$$y^{\text{trial}}_{\text{MLP}}(x)=y_{\text{MLP}}(x)\,(x-a)^{M_{a}}(b-x)^{M_{b}}+g(x). \tag{2.9}$$

In this work, we consider trial functions as follows:

$$y^{\text{trial}}=\begin{cases}y_{\text{ratio}}(x)\,(x-a)^{M_{a}}(b-x)^{M_{b}}+g(x),\\ y_{\text{Legendre}}(x)\,(x-a)^{M_{a}}(b-x)^{M_{b}}+g(x),\\ y_{\text{MLP}}(x)\,(x-a)^{M_{a}}(b-x)^{M_{b}}+g(x).\end{cases}$$

### 2.4 The loss function and the update of the weights

The general form of differential equations with boundary conditions can be expressed as

$$F\!\left[x,y(x),y^{(1)}(x),\ldots,y^{(k)}(x)\right]=0,\qquad x\in[a,b] \tag{2.10}$$

and

$$C\!\left[x,y(x),y^{(1)}(x),\ldots,y^{(l)}(x)\right]=0,\qquad x=a,\,b, \tag{2.11}$$

where $k$ is the order of the ODE and $C$ represents the boundary conditions of the ODE.

Since the trial function, Eq. (2.2), meets the boundary conditions automatically, in solving such ODEs we only need to minimize the following loss function

$$\text{loss}=\left\{F\!\left[x,y^{\text{trial}}(x),\frac{dy^{\text{trial}}}{dx},\ldots,\frac{d^{k}y^{\text{trial}}}{dx^{k}}\right]\right\}^{2} \tag{2.12}$$

by training the ratio net.

The weights in the ratio net are trained to minimize the loss function, Eq. (2.12), through the gradient descent method:

$$\omega'=\omega-\eta\,\frac{\partial\,\text{loss}}{\partial\omega}, \tag{2.13}$$

where $\eta$ is the learning rate.

The iteration of the weights in the ratio net is implemented through TensorFlow. The details of the implementation of the algorithm will be given on GitHub for each illustrative example.
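The whole pipeline of Sec. 2 can be illustrated end to end on a toy problem. The sketch below solves $y'=y$ on $[0,1]$ with $y(0)=1$, using a tiny polynomial as a stand-in for the network, finite differences instead of TensorFlow's automatic differentiation, and the descent update $\omega\leftarrow\omega-\eta\,\partial\text{loss}/\partial\omega$ of Eq. (2.13); everything here is for illustration only and is not the paper's implementation.

```python
import numpy as np

# Toy version of the training loop: solve y' = y on [0, 1] with y(0) = 1.
# The trial function y = 1 + x * (w0 + w1 * x) satisfies y(0) = 1 automatically
# (the analogue of Eq. (2.2)); the loss is the squared ODE residual, Eq. (2.12).
xs = np.linspace(0.0, 1.0, 21)
eps, eta = 1e-5, 0.05

def loss(w):
    y = lambda x: 1.0 + x * (w[0] + w[1] * x)
    dy = (y(xs + eps) - y(xs - eps)) / (2 * eps)   # y' by central differences
    return np.mean((dy - y(xs)) ** 2)              # residual of y' - y = 0

w = np.zeros(2)
for _ in range(2000):                              # plain gradient descent, Eq. (2.13)
    grad = np.array([(loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
    w -= eta * grad
```

After training, the residual loss drops by more than two orders of magnitude and the trial function approximates $e^{x}$ as well as a quadratic can; an autodiff framework replaces both finite-difference steps in a real implementation.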

## 3 Illustrative examples

In this section, we apply the ratio net to various illustrative examples, including examples from Refs. [11, 25, 24, 26, 27]. We compare the ratio net with other representative methods. The results show that the ratio net achieves higher efficiency.

### 3.1 Non-linear ordinary differential equations.

Example 1. We consider the ODE

$$y'(x)=2y(x)-y^{2}(x)+1 \tag{3.1}$$

with boundary conditions

$$y(0)=1+\sqrt{2}\tanh\!\left[\frac{1}{2}\log\!\left(\frac{\sqrt{2}-1}{\sqrt{2}+1}\right)\right] \tag{3.2}$$

and

$$y(1)=1+\sqrt{2}\tanh\!\left[\sqrt{2}+\frac{1}{2}\log\!\left(\frac{\sqrt{2}-1}{\sqrt{2}+1}\right)\right]. \tag{3.3}$$

The exact solution of example 1 is

$$y(x)=1+\sqrt{2}\tanh\!\left[\sqrt{2}\,x+\frac{1}{2}\log\!\left(\frac{\sqrt{2}-1}{\sqrt{2}+1}\right)\right]. \tag{3.4}$$

Fig. (3) shows the results given by the three neural networks. The effectiveness is characterized by the fitting diagram and by the relative error between the numerical solution and the analytical solution. The decreasing trend of the loss function shows the efficiency of the method.

Figure 3: Comparison of the results given by the three neural networks for example 1. The learning rates are all 0.01.
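Exact solutions like Eq. (3.4) are easy to sanity-check numerically before they are used as a reference: substitute the closed form into the ODE and confirm the residual vanishes. A minimal check for example 1:

```python
import numpy as np

# Substitute the exact solution (3.4) into y' = 2y - y^2 + 1 (Eq. (3.1))
# and confirm that the residual is zero up to finite-difference error.
def y_exact(x):
    c = 0.5 * np.log((np.sqrt(2.0) - 1.0) / (np.sqrt(2.0) + 1.0))
    return 1.0 + np.sqrt(2.0) * np.tanh(np.sqrt(2.0) * x + c)

xs = np.linspace(0.0, 1.0, 101)
eps = 1e-6
dy = (y_exact(xs + eps) - y_exact(xs - eps)) / (2 * eps)  # y' by central differences
residual = dy - (2 * y_exact(xs) - y_exact(xs) ** 2 + 1)
assert np.max(np.abs(residual)) < 1e-6
```

The same residual check applies verbatim to the numerical solutions, where it is exactly the quantity that the loss function (2.12) drives to zero.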

Example 2. We consider the ODE

$$y''(x)=\frac{y^{3}(x)-2y^{2}(x)}{2x^{2}} \tag{3.5}$$

with boundary conditions

$$y(0)=0 \quad\text{and}\quad y(2)=\frac{4}{3}, \tag{3.6}$$

which has the exact solution $y(x)=\frac{2x}{x+1}$. Fig. (4) shows the results.

Figure 4: Comparison of the results given by the three neural networks for example 2. The learning rates are all 0.1.

Example 3. We consider the ODE

$$y'''(x)=-y^{2}(x)-\cos(x)+\sin^{2}(x) \tag{3.7}$$

with boundary conditions

$$y(0)=0,\quad y'(0)=1,\quad\text{and}\quad y(\pi)=0, \tag{3.8}$$

which has the exact solution $y(x)=\sin(x)$. Fig. (5) shows the results.

Figure 5: Comparison of the results given by the three neural networks for example 3. The learning rates are all 0.01.

### 3.2 High-order ordinary differential equations.

Example 4. We consider the ODE

$$y^{(4)}(x)=120x \tag{3.9}$$

with boundary conditions

$$y(-1)=1,\quad y'(-1)=5,\quad y(1)=3,\quad\text{and}\quad y'(1)=5, \tag{3.10}$$

which has the exact solution $y(x)=x^{5}+2$. Fig. (6) shows the results.

Figure 6: Comparison of the results given by the three neural networks for example 4. The learning rates are all 0.0001.

Example 5. We consider the ODE

$$y^{(4)}(x)=-\frac{x^{2}}{1+y^{2}(x)}-72\left(1-5x+5x^{2}\right)+\frac{x^{2}}{1+\left(x-x^{2}\right)^{6}} \tag{3.11}$$

with boundary conditions

$$y(0)=0,\quad y'(0)=0,\quad y(1)=0,\quad\text{and}\quad y'(1)=0, \tag{3.12}$$

which has the exact solution $y(x)=\left(x-x^{2}\right)^{3}$. Fig. (7) shows the results.

Figure 7: Comparison of the results given by the three neural networks for example 5. The learning rates are all 0.0001.

Example 6. We consider the ODE

$$y^{(4)}(x)+y(x)=\left[\left(\frac{\pi}{2}\right)^{4}+1\right]\cos\!\left(\frac{\pi}{2}x\right) \tag{3.13}$$

with boundary conditions

$$y(-1)=0,\quad y'(-1)=\frac{\pi}{2},\quad y(1)=0,\quad\text{and}\quad y'(1)=-\frac{\pi}{2}, \tag{3.14}$$

which has the exact solution $y(x)=\cos\!\left(\frac{\pi}{2}x\right)$. Fig. (8) shows the results.

Figure 8: Comparison of the results given by the three neural networks for example 6. The learning rates are all 0.0001.

Example 7. We consider the ODE

$$y^{(4)}(x)+y'(x)=4x^{3}+24 \tag{3.15}$$

with boundary conditions

$$y(0)=0,\quad y'(0)=0,\quad y(1)=1,\quad\text{and}\quad y'(1)=4, \tag{3.16}$$

which has the exact solution $y(x)=x^{4}$. Fig. (9) shows the results.

Figure 9: Comparison of the results given by the three neural networks for example 7. The learning rates are all 0.0001.

### 3.3 Non-linear and high-order ordinary differential equations.

Example 8. We consider the ODE

$$y^{(5)}(x)=e^{-x}y^{2}(x) \tag{3.17}$$

with boundary conditions

 (3.18)

which has the exact solution $y(x)=e^{x}$. Fig. (10) shows the results.

Figure 10: Comparison of the results given by the three neural networks for example 8. The learning rates are all 0.01.

Example 9. We consider the ODE

$$y^{(6)}(x)+y(x)\,y''(x)+y'(x)\,y^{(5)}(x)-\pi^{2}\sin(\pi x)\,y'''(x)+\pi\,y^{2}(x)=-\pi^{6}\cos(\pi x) \tag{3.19}$$

with boundary conditions

$$y(-1)=\cos(-\pi),\quad y'(-1)=-\pi\sin(-\pi),\quad y''(-1)=-\pi^{2}\cos(-\pi), \tag{3.20}$$
$$y(1)=\cos(\pi),\quad y'(1)=-\pi\sin(\pi),\quad\text{and}\quad y''(1)=-\pi^{2}\cos(\pi), \tag{3.21}$$

which has the exact solution $y(x)=\cos(\pi x)$. Fig. (11) shows the results.

Figure 11: Comparison of the results given by the three neural networks for example 9. The learning rates are all 0.001.

### 3.4 Linear ordinary differential equations.

In this section, we consider several linear ODEs.

Example 10,

$$y''(x)-y'(x)=-2\sin(x) \tag{3.22}$$

with boundary conditions

$$y(0)=-1 \quad\text{and}\quad y\!\left(\frac{\pi}{2}\right)=1, \tag{3.23}$$

which has the exact solution $y(x)=\sin(x)-\cos(x)$.

Example 11,

$$y'(x)+\frac{1}{5}y(x)=e^{-\frac{x}{5}}\cos(x) \tag{3.24}$$

with boundary conditions

$$y(0)=0 \quad\text{and}\quad y(1)=\sin(1)\,e^{-1/5}, \tag{3.25}$$

which has the exact solution $y(x)=e^{-\frac{x}{5}}\sin(x)$.

Example 12,

$$y'(x)+\left(x+\frac{1+3x^{2}}{1+x+x^{3}}\right)y(x)=x^{3}+2x+x^{2}\,\frac{1+3x^{2}}{1+x+x^{3}} \tag{3.26}$$

with boundary conditions

$$y(0)=1 \quad\text{and}\quad y(1)=1+\frac{1}{3}e^{-1/2}, \tag{3.27}$$

which has the exact solution $y(x)=\dfrac{e^{-x^{2}/2}}{1+x+x^{3}}+x^{2}$.

Example 13,

$$y'(x)-\sin(x)\,y(x)=2x-x^{2}\sin(x) \tag{3.28}$$

with boundary conditions

$$y(-1)=1 \quad\text{and}\quad y(1)=1, \tag{3.29}$$

which has the exact solution $y(x)=x^{2}$.

Example 14,

$$y''(x)+x\,y'(x)-4y(x)=12x^{2}-3x \tag{3.30}$$

with boundary conditions

$$y(0)=0 \quad\text{and}\quad y(2)=18, \tag{3.31}$$

which has the exact solution $y(x)=x^{4}+x$.

Example 15,

$$y''(x)-y'(x)=-2\sin(x) \tag{3.32}$$

with boundary conditions

$$y(0)=-1 \quad\text{and}\quad y\!\left(\frac{\pi}{2}\right)=1, \tag{3.33}$$

which has the exact solution $y(x)=\sin(x)-\cos(x)$.

Example 16,

$$y''(x)+2y'(x)+y(x)=x^{2}+3x+1 \tag{3.34}$$

with boundary conditions

$$y(0)=0 \quad\text{and}\quad y(1)=1-e^{-1}, \tag{3.35}$$

which has the exact solution $y(x)=x^{2}-x+1-e^{-x}$.

Example 17,

$$y''(x)+\frac{1}{5}y'(x)+y(x)=-\frac{1}{5}e^{-\frac{x}{5}}\cos(x) \tag{3.36}$$

with boundary conditions

$$y(0)=0 \quad\text{and}\quad y(2)=\sin(2)\,e^{-\frac{2}{5}}, \tag{3.37}$$

which has the exact solution $y(x)=e^{-\frac{x}{5}}\sin(x)$.

Example 18,

$$y'(x)=y(x)-x^{2}+1 \tag{3.38}$$

with boundary conditions

$$y(0)=0.5 \quad\text{and}\quad y(1)=4-\frac{e}{2}, \tag{3.39}$$

which has the exact solution $y(x)=(x+1)^{2}-\frac{1}{2}e^{x}$. Fig. (12) shows the results of examples 11-18 given by the three neural networks. The effectiveness is characterized by the fitting diagram and by the relative error between the analytical solution and the numerical solution. The decreasing trend of the loss function shows the efficiency of the method.

Figure 12: The loss versus steps for examples 10 to 18. The learning rates of the examples may differ, but for the same example, the learning rates of the different models are the same.

Example 19,

$$y''(x)+y(x)=2 \tag{3.40}$$

with boundary conditions

$$y(0)=1 \quad\text{and}\quad y(1)=0, \tag{3.41}$$

which has the exact solution $y(x)=2-\cos(x)+\frac{\cos(1)-2}{\sin(1)}\sin(x)$.

Example 20,

$$y''(x)+\frac{2}{x}\,y'(x)+2y(x)=0 \tag{3.42}$$

with boundary conditions

$$y(0.001)=1 \quad\text{and}\quad y(1)=\frac{\sin(\sqrt{2})}{\sqrt{2}}, \tag{3.43}$$

which has the exact solution $y(x)=\frac{\sin(\sqrt{2}\,x)}{\sqrt{2}\,x}$.

Example 21,

$$y'(x)+\frac{\cos(x)}{\sin(x)}\,y(x)=\frac{1}{\sin(x)} \tag{3.44}$$

with boundary conditions

$$y(1)=\frac{3}{\sin(1)} \quad\text{and}\quad y(2)=\frac{4}{\sin(2)}, \tag{3.45}$$

which has the exact solution $y(x)=\frac{x+2}{\sin(x)}$.

Example 22,

$$y''(x)-\frac{1}{1+e^{x}}\,y'(x)-\frac{15\,e^{2x}}{\left(1+e^{x}\right)^{2}}\,y(x)=\frac{e^{2x}}{\left(1+e^{x}\right)^{6}} \tag{3.46}$$

with boundary conditions

$$y(-1)=\frac{1}{\left(1+e^{-1}\right)^{4}} \quad\text{and}\quad y(0)=\frac{1}{2^{4}}, \tag{3.47}$$

which has the exact solution $y(x)=\frac{1}{\left(1+e^{x}\right)^{4}}$.

Example 23,

$$y'(x)=4x^{3}-3x^{2}+2 \tag{3.48}$$

with boundary conditions

$$y(0)=0 \quad\text{and}\quad y(1)=2, \tag{3.49}$$

which has the exact solution $y(x)=x^{4}-x^{3}+2x$.

Example 24,

$$y'(x)=y(x) \tag{3.50}$$

with boundary conditions

$$y(0)=1 \quad\text{and}\quad y(1)=e, \tag{3.51}$$

which has the exact solution $y(x)=e^{x}$.

Example 25,

$$y'(x)=3\cos(x)\sin^{2}(x)+6\sin(3x) \tag{3.52}$$

with boundary conditions

$$y(0)=-3 \quad\text{and}\quad y(1)=-2\cos(3)+\sin^{3}(1)-1, \tag{3.53}$$

which has the exact solution $y(x)=\sin^{3}(x)-2\cos(3x)-1$.

Example 26,

$$y''(x)=3x\left(1-y'(x)\right)-27x^{3} \tag{3.54}$$

with boundary conditions

$$y(1)=2 \quad\text{and}\quad y(2)=\frac{27}{4}, \tag{3.55}$$

which has the exact solution . Fig. (13) shows the results of examples 19-26.

Figure 13: The loss versus steps for examples 19 to 26. The learning rates of the examples may differ, but for the same example, the learning rates of the different models are the same.

## 4 Conclusions

In this paper, an effective and efficient method for solving high-order and non-linear ordinary differential equations is provided. The method is based on the ratio net. By comparing the method with existing methods, such as the polynomial based method and the MLP based method, we show that the ratio net is both effective and efficient. In future research, we will use the ratio net to solve partial differential equations.

## 5 Acknowledgments

We are very indebted to Prof. Wu-Sheng Dai for his enlightenment and encouragement. We are very indebted to Prof. Guan-Wen Fang and Yong-Xie for their encouragements. This work is supported by Yunnan Youth Basic Research Projects (202001AU070020) and Doctoral Programs of Dali University (KYBS201910).

## References

•  J. M. Franco, Runge–Kutta–Nyström methods adapted to the numerical integration of perturbed oscillators, Computer Physics Communications (2002).
•  G. Ledder, Differential equations: a modeling approach. McGraw-Hill Higher Education, 2005.
•  R. Duclous, B. Dubroca, and M. Frank, A deterministic partial differential equation model for dose calculation in electron radiotherapy, Physics in Medicine & Biology 55 (2010), no. 13 3843.
•  L. Zhao and A. Yin, High-order partial differential equation de-noising method for vibration signal, Mathematical Methods in the Applied Sciences (2015).
•  V. F. Zaitsev and A. D. Polyanin, Handbook of exact solutions for ordinary differential equations. CRC press, 2002.
•  G. Corliss and Y. Chang, Solving ordinary differential equations using taylor series, ACM Transactions on Mathematical Software (TOMS) 8 (1982), no. 2 114–144.
•  H. Lee and I. S. Kang, Neural algorithm for solving differential equations, Journal of Computational Physics 91 (1990), no. 1 110–131.
•  S. Mall and S. Chakraverty, Regression-based neural network training for the solution of ordinary differential equations, International Journal of Mathematical Modelling and Numerical Optimisation 4 (2013), no. 2 136–149.
•  S. Chakraverty and S. Mall, Regression-based weight generation algorithm in neural network for solution of initial and boundary value problems, Neural Computing and Applications 25 (2014), no. 3 585–594.
•  S. Mall and S. Chakraverty, Application of legendre neural network for solving ordinary differential equations, Applied Soft Computing (2016) 347–356.
•  Y. Yang, M. Hou, and J. Luo, A novel improved extreme learning machine algorithm in solving ordinary differential equations by legendre neural network methods, Advances in Difference Equations 2018 (2018), no. 1.
•  M. Kumar and N. Yadav, Multilayer perceptrons and radial basis function neural network methods for the solution of differential equations: a survey, Computers & Mathematics with Applications 62 (2011), no. 10 3796–3811.
•  H. Alli, A. Uçar, and Y. Demir, The solutions of vibration control problems using artificial neural networks, Journal of the Franklin Institute 340 (2003), no. 5 307–325.
•  Y. Shirvany, M. Hayati, and R. Moradian, Multilayer perceptron neural networks with novel unsupervised training method for numerical solution of the partial differential equations, Applied Soft Computing 9 (2009), no. 1 20–29.
•  T. Schneidereit and M. Breuß, Solving ordinary differential equations using artificial neural networks - a study on the solution variance, in Proceedings of the Conference Algoritmy, pp. 21–30, 2020.
•  F. B. Rizaner and A. Rizaner, Approximate solutions of initial value problems for ordinary differential equations using radial basis function networks, Neural Processing Letters (2017).
•  S. Mall and S. Chakraverty, Single layer chebyshev neural network model for solving elliptic partial differential equations, Neural Processing Letters (2016).
•  S. Mall and S. Chakraverty, Chebyshev neural network based model for solving lane–emden type equations, Applied Mathematics and Computation 247 (2014) 100–114.
•  S. Chakraverty and S. Mall, Single layer chebyshev neural network model with regression-based weights for solving nonlinear ordinary differential equations, Evolutionary Intelligence 13 (2020), no. 4.
•  J. C. Patra and P. K. Meher, Intelligent sensors using computationally efficient Chebyshev neural networks.
•  A. Malek and R. S. Beidokhti, Numerical solution for high order differential equations using a hybrid neural network—optimization method, Applied Mathematics and Computation 183 (2006), no. 1 260–271.
•  C.-C. Zhou, H.-L. Tu, Y. Liu, and J. Hua, Activation functions are not needed: the ratio net, arXiv preprint arXiv:2005.06678 (2020).
•  C.-C. Zhou and Y. Liu, The pade approximant based network for variational problems, arXiv preprint arXiv:2004.00711 (2020).
•  S. Ezadi and N. Parandin, An application of neural networks to solve ordinary differential equations.
•  M. A. Ramadan, K. R. Raslan, T. S. El Danaf, and M. A. Abd El Salam, An exponential Chebyshev second kind approximation for solving high-order ordinary differential equations in unbounded domains, with application to Dawson's integral, Journal of the Egyptian Mathematical Society 25 (2016), no. 2.
•  A. Akyüz-Daşcıoğlu and H. Çerdik Yaslan, The solution of high-order nonlinear ordinary differential equations by Chebyshev series, Applied Mathematics and Computation 217 (2011), no. 12 5658–5666.
•  H. F. Parapari and M. B. Menhaj, Solving nonlinear ordinary differential equations using neural networks, in International Conference on Control, 2016.