# The Padé Approximant Based Network for Variational Problems

In solving a variational problem, the key is to efficiently find the target function that minimizes or maximizes the specified functional. In this paper, using the Padé approximant, we suggest a method for the variational problem. By comparing the method with those based on radial basis function networks (RBF), multilayer perceptron networks (MLP), and Legendre polynomials, we show that the method searches for the target function effectively and efficiently.


## 1 Introduction

Many problems arise as variational problems, such as the principle of minimum action in theoretical physics and the optimal control problem in engineering. In solving a variational problem, the key is to efficiently find the target function that minimizes or maximizes the specified functional.

In using non-analytical methods to search for the target function, one faces two problems: first, ensuring that the target function lies within the search range; second, ensuring that the target function can be found with limited computing power and time. The first is a problem of effectiveness and the second a problem of efficiency.

The direct method of Ritz and Galerkin [1, 2, 3] expresses the target function as a linear combination of basis functions and reduces the problem to solving equations for the coefficients. However, the basis functions are determined by the boundary condition; as a result, the target function might not be expressible in the chosen basis and would thus be excluded from the search range.

To improve the method along this line, one uses Walsh functions [4], orthogonal polynomials [5, 6, 7, 8], and Fourier series [9, 10], which are complete and orthogonal, to express the target function, and converts the boundary condition into a constraint on the coefficients. In that approach, it is the completeness of the basis functions that ensures effectiveness.

Recently, multilayer perceptron networks (MLP) have been used to solve variational problems [11, 12, 13]. In this case, the functional becomes the loss function and the boundary condition usually becomes an extra term added to the loss function. The MLP is trained to learn the shape of the target function and to satisfy the boundary condition simultaneously. In that approach, it is the universal approximation property of the MLP [14, 15, 16, 17, 18] that guarantees effectiveness.

In short, the completeness of the basis functions or the universal approximation property of neural networks already solves the problem of effectiveness. It remains to consider the problem of efficiency. For example, when searching for the target function with an MLP, the network might fall into a local minimum instead of the global minimum.

In this paper, using the Padé approximant, we suggest a method for the variational problem. By comparing the method with those based on radial basis function networks (RBF), multilayer perceptron networks (MLP), and Legendre polynomials, we show that the method searches for the target function effectively and efficiently.

This paper is organized as follows. In Sec. 2, we introduce the main method, where an effective expression of the target function is constructed. In Sec. 3, we solve illustrative examples. Conclusions and outlooks are given in Sec. 4.

## 2 The main method

In this section, we show the details of constructing an efficient expression of the target function based on the Padé approximant.

### 2.1 The Padé approximant: a brief review

The Padé approximant is a rational function of numerator degree $m$ and denominator degree $n$ [19, 20],

$$y_{\rm pade}(x)=\frac{\sum_{j=1}^{m}w_jx^j+b_1}{\sum_{i=1}^{n}w'_ix^i+b_2}, \tag{2.1}$$

where $w_j$, $w'_i$, $b_1$, and $b_2$ are parameters. For the sake of convenience, we denote the structure of the Padé approximant as Padé-$[m/n]$.

Normally, the Padé-$[m/n]$ approximant ought to fit a power series through the orders $x^0, x^1, \ldots, x^{m+n}$, that is,

$$\sum_{i=0}^{\infty}c_ix^i=\frac{\sum_{j=1}^{m}w_jx^j+b_1}{\sum_{i=1}^{n}w'_ix^i+b_2}+O\!\left(x^{m+n+1}\right). \tag{2.2}$$

It is a highly efficient tool for approximating a complicated real function with only a finite number of parameters and has wide applications in many problems [21, 22, 23, 24, 25]. Here, we use it as an approximator for the target function.
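As a concrete illustration (not from the paper), the $[1/1]$ Padé approximant of $e^x$, obtained by matching the Taylor series through order $x^2$, is $(2+x)/(2-x)$; a minimal sketch:

```python
import math

def pade_1_1_exp(x):
    # [1/1] Pade approximant of e^x (m = n = 1 in the paper's notation):
    # matches the Taylor series through x^2, with error O(x^3)
    return (2.0 + x) / (2.0 - x)

# near x = 0 the rational form tracks e^x to third order (error ~ x^3 / 12)
for x in (0.05, 0.1, 0.2):
    assert abs(pade_1_1_exp(x) - math.exp(x)) < abs(x) ** 3
```

Despite having only two effective coefficients, the rational form is already more accurate near the origin than the degree-2 Taylor polynomial, which is the kind of efficiency the method exploits.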

### 2.2 The RBF, the MLP, and the Legendre Polynomial: brief reviews

In order to compare the method with those based on radial basis function networks (RBF), multilayer perceptron networks (MLP), and Legendre polynomials, we give brief reviews of each.

The MLP. The MLP, or multilayer feed-forward network, is a typical feed-forward neural network. It transforms an $n$-dimensional input into a $k$-dimensional output and implements a class of mappings from $\mathbb{R}^n$ to $\mathbb{R}^k$ [14, 15, 16, 17, 18]. The building block of the MLP is the neuron, in which a linear and a non-linear transform are successively applied to the input. A collection of neurons forms a layer and a collection of layers gives an MLP. For example, for a one-layer MLP with $l$ hidden nodes, the relation between the input $x$ and the output can be explicitly written as

$$y_{\rm mlp}(x)=\sum_{i=1}^{l}w'_i\,\sigma\!\left(\sum_{j=1}^{n}w_{ij}x_j+b_1\right)+b_2, \tag{2.3}$$

where $y_{\rm mlp}$ is a one-dimensional output in this case, and $\sigma$ is the non-linear map called the activation function, usually chosen to be, e.g., the sigmoid function

$${\rm sigmoid}(x)=\frac{1}{1+e^{-x}}. \tag{2.4}$$

$w_{ij}$, $w'_i$, $b_1$, and $b_2$ are parameters. By tuning the parameters, the MLP is capable of approximating a target function. For the sake of convenience, we denote the structure of the MLP by the number of neurons per layer; e.g., a two-layer MLP with $l$ neurons in each layer is MLP-$l$-$l$.
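A minimal sketch of the forward pass of Eq. (2.3) for a scalar input; the shared biases follow the equation as written (in common practice each neuron carries its own bias):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mlp_forward(x, w, w_out, b1, b2):
    # Eq. (2.3) with scalar input: y = sum_i w_out[i] * sigmoid(w[i]*x + b1) + b2
    return sum(wo * sigmoid(wi * x + b1) for wi, wo in zip(w, w_out)) + b2
```

For example, `mlp_forward(0.0, [1.0, -1.0], [2.0, 2.0], 0.0, 0.0)` evaluates a two-hidden-node MLP at the origin, where both neurons output `sigmoid(0) = 0.5`, giving `2.0`.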

The RBF. Beyond the MLP, the RBF, or radial basis function network, is another typical neural network. Similarly, it transforms an $n$-dimensional input into a $k$-dimensional output and implements a class of mappings from $\mathbb{R}^n$ to $\mathbb{R}^k$ [26, 27, 28]. However, the structure of the RBF is different: the distance between the input and a center is transformed by a kernel function, and a linear combination of the kernel outputs gives the final output. For example, for an RBF with $l$ centers, the relation between the input and the output can be explicitly written as

$$y_{\rm rbf}(x)=\sum_{j=1}^{l}w_j\exp\!\left[-\frac{\sum_{i=1}^{n}(x_i-c_{ji})^2}{2\sigma_j}\right]+b, \tag{2.5}$$

where the kernel function is the Gauss function in this case. The RBF is also a good approximator [26, 27, 28] and, in some cases, more efficient than the MLP. For the sake of convenience, we denote the structure of the RBF as RBF-$l$.
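A minimal sketch of the forward pass of Eq. (2.5) for a scalar input:

```python
import math

def rbf_forward(x, centers, widths, w, b):
    # Eq. (2.5) with scalar input: Gaussian kernel around each center c_j,
    # width sigma_j, linearly combined with weights w_j plus a bias b
    return sum(wj * math.exp(-((x - cj) ** 2) / (2.0 * sj))
               for cj, sj, wj in zip(centers, widths, w)) + b
```

At an input equal to one of the centers, that kernel contributes exactly its weight: `rbf_forward(1.0, [1.0], [0.5], [2.0], 0.25)` gives `2.25`.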

The Legendre polynomial. In real analysis, a real function can be expressed as a linear combination of basis functions such as complete polynomials. The Legendre polynomials are complete and orthogonal and satisfy the recurrence relation

$$P_{n+1}(x)=\frac{2n+1}{n+1}xP_n(x)-\frac{n}{n+1}P_{n-1}(x) \tag{2.6}$$

for $n=1,2,3,\ldots$, where $P_n(x)$ is the Legendre polynomial of order $n$, $P_0(x)=1$, and $P_1(x)=x$. In this work, we express the target function as

$$y_{\rm legend}(x)=\sum_{j=1}^{m}w_jP_j(x)+b \tag{2.7}$$

with $w_j$ and $b$ being parameters. For the sake of convenience, we denote the structure as Leg-$m$.
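The recurrence Eq. (2.6) is all that is needed to evaluate the basis; a minimal sketch:

```python
def legendre(n, x):
    # Bonnet's recurrence, Eq. (2.6): (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1},
    # seeded with P_0(x) = 1 and P_1(x) = x
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```

As a check, `legendre(2, 0.5)` returns `-0.125`, matching $P_2(x)=(3x^2-1)/2$, and $P_n(1)=1$ for every order.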

The power polynomial. To the reader, the power polynomial is a familiar tool for approximating a function; the Taylor expansion, a textbook topic, is based on it. Here, in order to show that methods such as the MLP and the RBF are nothing mysterious but merely approximators, we also give results based on the power polynomial. We express the target function as

$$y_{\rm poly}(x)=\sum_{j=1}^{m}w_jx^j+b, \tag{2.8}$$

where $w_j$ and $b$ are parameters. We show that the neural-network method differs from the power-polynomial method only in efficiency. For the sake of convenience, we denote the structure as Poly-$m$.

### 2.3 The expression of the target function

In searching numerically for the target function that minimizes or maximizes the specified functional, the parameters are tuned to shape the output function. However, the boundary condition might reduce the efficiency, because it becomes an extra constraint on the parameters: the parameters are then tuned not only to shape the function but also to pin the function to the fixed points. In this section, we suggest an expression for the target function which has the universal approximation property and satisfies the boundary condition automatically. With this approach, the boundary condition is no longer an extra constraint on the parameters.

There are various kinds of boundary conditions in variational problems. Here, without loss of generality, we focus on one-dimensional problems with the fixed-end boundary condition $y(x_a)=y_a$ and $y(x_b)=y_b$.

The boundary factor. We introduce the boundary factor,

$${\rm bound}(x)=(x-x_a)^{m_a}(x_b-x)^{m_b}, \tag{2.9}$$

where $m_a$ and $m_b$ are parameters, and $x_a$ and $x_b$ are the boundaries of $x$. For the sake of convenience, we rewrite the output of Eqs. (2.1)-(2.8) as

$$y_{\rm net}(x)=\begin{cases}y_{\rm pade}(x)\\ y_{\rm mlp}(x)\\ y_{\rm rbf}(x)\\ y_{\rm legend}(x)\\ y_{\rm poly}(x).\end{cases} \tag{2.10}$$

Multiplying the output by the boundary factor, Eq. (2.9), ensures that the product passes through the points $(x_a,0)$ and $(x_b,0)$.

The construction. In order to pass through the fixed-end points $(x_a,y_a)$ and $(x_b,y_b)$, we add the function

$$g(x)=x\,\frac{y_b-y_a}{x_b-x_a}+\frac{x_by_a-y_bx_a}{x_b-x_a} \tag{2.11}$$

to the output. Finally, the expression of the target function reads

$$y_{\rm net\_final}(x)=y_{\rm net}(x)\,{\rm bound}(x)+g(x). \tag{2.12}$$

Eq. (2.12) inherits the good approximation ability of the Padé approximant, the MLP, and so on, while passing through the fixed-end points automatically.
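The construction of Eqs. (2.9)-(2.12) can be sketched directly (assuming $m_a=m_b=1$ for the boundary factor); whatever approximator plays the role of $y_{\rm net}$, the result satisfies the boundary condition by construction:

```python
def bound(x, xa, xb, ma=1, mb=1):
    # boundary factor, Eq. (2.9): vanishes at x = xa and at x = xb
    return (x - xa) ** ma * (xb - x) ** mb

def g(x, xa, xb, ya, yb):
    # straight line through (xa, ya) and (xb, yb), Eq. (2.11)
    return x * (yb - ya) / (xb - xa) + (xb * ya - yb * xa) / (xb - xa)

def y_final(x, y_net, xa, xb, ya, yb):
    # Eq. (2.12): the boundary condition holds for any choice of y_net
    return y_net(x) * bound(x, xa, xb) + g(x, xa, xb, ya, yb)

# even an arbitrary constant net still satisfies y(-1) = 0 and y(1) = 2
assert abs(y_final(-1.0, lambda x: 123.0, -1.0, 1.0, 0.0, 2.0)) < 1e-12
assert abs(y_final(1.0, lambda x: 123.0, -1.0, 1.0, 0.0, 2.0) - 2.0) < 1e-12
```

Because the boundary factor kills `y_net` at the ends and `g` interpolates the fixed values, tuning the parameters of `y_net` only shapes the interior of the function.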

### 2.4 The loss function and the learning algorithm

The loss function. The specified functional now reads

$$J=\int_{x_a}^{x_b}F\left[y_{\rm net\_final}(x),\,y'_{\rm net\_final}(x),\,\ldots\right]dx, \tag{2.13}$$

where $F$ is the specified function of $y$, $y'$, and so on. In order to carry out a numerical computation, Eq. (2.13) is approximated by a summation

$${\rm loss}=\frac{x_b-x_a}{N}\sum_{i=1}^{N}F\left[y_{\rm net\_final}(x_i),\,y'_{\rm net\_final}(x_i),\,\ldots\right], \tag{2.14}$$

where $N$ is the number of sample points and the $x_i$ are sampled uniformly from $[x_a,x_b]$. Thus the variational problem is converted into an optimization problem:

$$\min_{w,\,b,\,m}\,{\rm loss}. \tag{2.15}$$
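The discretization of Eq. (2.14) is a plain average of the integrand over uniform sample points; a minimal sketch for a generic integrand $F(y, y')$, with the derivative taken by a central finite difference (an implementation choice here; the paper's code uses automatic differentiation):

```python
import math

def discretized_loss(F, y, xa, xb, n=1000):
    # Eq. (2.14): loss = (xb - xa)/N * sum_i F(y(x_i), y'(x_i)),
    # with x_i sampled uniformly and y' estimated by a central difference
    h, total = 1e-6, 0.0
    for i in range(n):
        x = xa + (xb - xa) * (i + 0.5) / n
        dy = (y(x + h) - y(x - h)) / (2.0 * h)
        total += F(y(x), dy)
    return (xb - xa) / n * total

# check: arc length of the straight line y = x + 1 on [-1, 1] is 2*sqrt(2)
J = discretized_loss(lambda y, dy: math.sqrt(1.0 + dy * dy),
                     lambda x: x + 1.0, -1.0, 1.0)
```

The check uses the shortest-path functional of Sec. 3, for which the discretized value agrees with the exact $2\sqrt{2}$ to high precision.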

The gradient descent method and the back-propagation algorithm. We use the gradient descent method to find the optimal parameters; e.g., a parameter $w$ is updated by

$$w_{\rm new}=w_{\rm old}-\eta\,\frac{\partial\,{\rm loss}}{\partial w}\bigg|_{w=w_{\rm old}}, \tag{2.16}$$

where $\eta$ is the learning rate, and $w_{\rm old}$ and $w_{\rm new}$ are the parameter before and after one step, respectively. In Eq. (2.16), $\partial\,{\rm loss}/\partial w$ is calculated by the back-propagation algorithm

$$\frac{\partial\,{\rm loss}}{\partial w}=\frac{\partial\,{\rm loss}}{\partial y_{\rm net\_final}}\frac{\partial y_{\rm net\_final}}{\partial w}+\frac{\partial\,{\rm loss}}{\partial y'_{\rm net\_final}}\frac{\partial y'_{\rm net\_final}}{\partial w}+\ldots. \tag{2.17}$$
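The update rule of Eq. (2.16) in a minimal sketch (plain gradient descent on a toy loss; the paper's implementation obtains the gradient by back-propagation and uses Adam):

```python
def gradient_descent(grad, w0, lr=0.1, steps=200):
    # Eq. (2.16): w_new = w_old - lr * d(loss)/dw, iterated
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# toy loss (w - 3)^2 with gradient 2*(w - 3); iterates converge to w = 3
w_star = gradient_descent(lambda w: 2.0 * (w - 3.0), 0.0)
```

Each step moves the parameter against the gradient, so for this convex toy loss `w_star` lands at the minimizer `3.0` to within numerical precision.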

An implementation based on Python and TensorFlow is available on GitHub. In the implementation, the back-propagation algorithm is handled automatically, and the Adam algorithm, a refined gradient descent method, is applied.

## 3 Illustrative examples

In this section, we use the method to solve variational problems that are partly collected from the literature [5, 6, 7, 8, 9, 10, 11, 12, 13].

1) The shortest path problem. The functional reads

$$J=\int_{-1}^{1}\sqrt{1+(y')^2}\,dx \tag{3.1}$$

with boundary condition

$$y(-1)=0\quad{\rm and}\quad y(1)=2. \tag{3.2}$$

Exact results are

$$y_{\rm exact}(x)=x+1,\qquad J(y_{\rm exact})=2\sqrt{2}\simeq2.8284. \tag{3.3}$$

The target function is a straight line in this case; however, this does not mean that the task is simple, because finding the target function without any pre-knowledge is much more difficult than learning to express a known target function.
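This example can be reproduced end to end in a few lines (hedged: a one-parameter constant net stands in for the paper's approximators, and finite-difference gradients replace TensorFlow back-propagation and Adam):

```python
import math

def y_trial(x, w):
    # constant net times the boundary factor (x + 1)(1 - x), plus
    # g(x) = x + 1, which interpolates the fixed ends y(-1) = 0, y(1) = 2
    return w * (x + 1.0) * (1.0 - x) + (x + 1.0)

def arc_length(w, n=200):
    # discretized Eq. (3.1); y' estimated by a central finite difference
    h, total = 1e-5, 0.0
    for i in range(n):
        x = -1.0 + 2.0 * (i + 0.5) / n
        dy = (y_trial(x + h, w) - y_trial(x - h, w)) / (2.0 * h)
        total += math.sqrt(1.0 + dy * dy)
    return 2.0 / n * total

# gradient descent on the single parameter w, gradient by finite difference
w, lr, eps = 0.5, 0.05, 1e-6
for _ in range(500):
    dw = (arc_length(w + eps) - arc_length(w - eps)) / (2.0 * eps)
    w -= lr * dw
# w drifts to 0, so y_trial approaches x + 1 and J approaches 2*sqrt(2)
```

Because $g(x)=x+1$ already coincides with the exact solution here, minimizing the arc length drives the net's contribution to zero, recovering Eq. (3.3).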

The numerical results and the efficiency of each method are shown in Fig. 1.

From Fig. 1, one can see that the method based on the Padé approximant converges faster than those based on the RBF and the MLP. In the methods based on the Legendre polynomials and the power polynomials, the initial value happens to be the target function.

2) The minimum drag problem. The functional reads

$$J=\int_{0}^{1}y\,y'^{3}\,dx$$

with boundary condition

$$y(0)=0\quad{\rm and}\quad y(1)=1. \tag{3.4}$$

Exact results are

$$y_{\rm exact}(x)=x^{3/4},\qquad J(y_{\rm exact})=\frac{27}{64}\simeq0.4219. \tag{3.5}$$

The numerical results and the efficiency of each method are shown in Fig. 2.

From Fig. 2, one can see that the method based on the Padé approximant again converges faster. The method based on the power polynomials converges very slowly this time. Since we have already shown that this method is capable of finding the target function, we drop it in the later examples because of its slow convergence.

3) A popular illustrative example. The functional reads

$$J=\int_{0}^{1}\left(y'^{2}+xy'\right)dx$$

with boundary condition

$$y(0)=0\quad{\rm and}\quad y(1)=\frac{1}{4}. \tag{3.6}$$

Exact results are

$$y_{\rm exact}(x)=\frac{1}{2}x\left(1-\frac{1}{2}x\right),\qquad J(y_{\rm exact})=\frac{1}{6}\simeq0.1667.$$

The numerical results and the efficiency of each method are shown in Fig. 3.

4) The fourth example. The functional reads

$$J=\int_{-\pi/2}^{\pi/2}\left[y'^{2}-2y\cos\!\left(x+\frac{\pi}{2}\right)\right]dx$$

with boundary condition

$$y(\pi/2)=0\quad{\rm and}\quad y(-\pi/2)=0. \tag{3.7}$$

Exact results are

$$y_{\rm exact}(x)=\cos\!\left(x+\frac{\pi}{2}\right)+\frac{2}{\pi}x,\qquad J(y_{\rm exact})=\frac{4}{\pi}-\frac{\pi}{2}\simeq-0.2976. \tag{3.8}$$

The numerical results and the efficiency of each method are shown in Fig. 4.

5) The fifth example. The functional reads

$$J=\int_{0}^{1}\left(y'^{2}-y^{2}-2xy\right)dx$$

with boundary condition

$$y(0)=0\quad{\rm and}\quad y(1)=0. \tag{3.9}$$

Exact results are

$$y_{\rm exact}(x)=\frac{\sin x}{\sin 1}-x,\qquad J(y_{\rm exact})=\cot 1-\frac{2}{3}\simeq-0.0246. \tag{3.10}$$

The numerical results and the efficiency of each method are shown in Fig. 5.

## 4 Conclusions and outlooks

In solving a variational problem, the key is to efficiently find the target function that minimizes or maximizes the specified functional; effectiveness and efficiency are both important. In this paper, using the Padé approximant, we suggested a method for the variational problem in which the fixed-end boundary condition is satisfied automatically. By comparing the method with those based on radial basis function networks (RBF), multilayer perceptron networks (MLP), and Legendre polynomials, we showed that the method searches for the target function effectively and efficiently.

The results show that the Padé approximant can improve the efficiency of neural networks. In solving a many-body system numerically in physics, the efficiency of the method is important because the number of degrees of freedom in such a system is large. The method could be used to search for the wave function of a many-body system efficiently. Moreover, it could be applied to other tasks, such as classification and translation.

## 5 Acknowledgments

We are deeply indebted to Dr. Dai for his enlightenment and encouragement.

## References

•  I. M. Gelfand, R. A. Silverman, et al., Calculus of variations. Courier Corporation, 2000.
•  L. D. Elsgolc, Calculus of variations. Courier Corporation, 2012.
•  M. Giaquinta and S. Hildebrandt, Calculus of variations II, vol. 311. Springer Science & Business Media, 2013.
•  C. Chen and C. Hsiao, A Walsh series direct method for solving variational problems, Journal of the Franklin Institute 300 (1975), no. 4 265–280.
•  R. Chang and M. Wang, Shifted Legendre direct method for variational problems, Journal of Optimization Theory and Applications 39 (1983), no. 2 299–307.
•  I.-R. Horng and J.-H. Chou, Shifted Chebyshev direct method for solving variational problems, International Journal of Systems Science 16 (1985), no. 7 855–861.
•  C. Hwang and Y. Shih, Laguerre series direct method for variational problems, Journal of Optimization Theory and Applications 39 (1983), no. 1 143–149.
•  M. Razzaghi and S. Yousefi, Legendre wavelets direct method for variational problems, Mathematics and Computers in Simulation 53 (2000), no. 3 185–192.
•  M. Razzaghi and M. Razzaghi, Fourier series direct method for variational problems, International Journal of Control 48 (1988), no. 3 887–895.
•  C.-H. Hsiao, Haar wavelet direct method for solving variational problems, Mathematics and Computers in Simulation 64 (2004), no. 5 569–585.
•  E. Weinan and B. Yu, The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems, Communications in Mathematics and Statistics 6 (2018), no. 1 1–12.
•  R. Lopez, E. Balsa-Canto, and E. Oñate, Neural networks for variational problems in engineering, International Journal for Numerical Methods in Engineering 75 (2008), no. 11 1341–1360.
•  R. L. Gonzalez, Neural networks for variational problems in engineering. PhD thesis, Universitat Politècnica de Catalunya (UPC), 2009.
•  K. Hornik, Approximation capabilities of multilayer feedforward networks, Neural networks 4 (1991), no. 2 251–257.
•  H. White, Connectionist nonparametric regression: Multilayer feedforward networks can learn arbitrary mappings, Neural networks 3 (1990), no. 5 535–549.
•  K. Hornik, M. Stinchcombe, and H. White, Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks, Neural networks 3 (1990), no. 5 551–560.
•  M. Leshno, V. Y. Lin, A. Pinkus, and S. Schocken, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function, Neural networks 6 (1993), no. 6 861–867.
•  K. Hornik, M. Stinchcombe, H. White, et al., Multilayer feedforward networks are universal approximators., Neural networks 2 (1989), no. 5 359–366.
•  G. A. Baker, Jr. and P. Graves-Morris, Padé Approximants, Encyclopedia of Mathematics and Its Applications, vol. 59. Cambridge University Press, 1996.
•  C. Brezinski, History of Continued Fractions and Padé Approximants, vol. 12. Springer Science & Business Media, 2012.
•  R. P. Brent, F. G. Gustavson, and D. Y. Yun, Fast solution of Toeplitz systems of equations and computation of Padé approximants, Journal of Algorithms 1 (1980), no. 3 259–295.
•  B. Cochelin, N. Damil, and M. Potier-Ferry, Asymptotic–numerical methods and Padé approximants for non-linear elastic structures, International Journal for Numerical Methods in Engineering 37 (1994), no. 7 1187–1213.
•  P. Langhoff and M. Karplus, Padé approximants for two- and three-body dipole dispersion interactions, The Journal of Chemical Physics 53 (1970), no. 1 233–250.
•  J. J. Loeffel, A. Wightman, B. Simon, and A. Martin, Padé approximants and the anharmonic oscillator, Phys. Lett. B 30 (1969) 656–658.
•  H. Vidberg and J. Serene, Solving the Eliashberg equations by means of N-point Padé approximants, Journal of Low Temperature Physics 29 (1977), no. 3-4 179–192.
•  J. Park and I. W. Sandberg, Universal approximation using radial-basis-function networks, Neural computation 3 (1991), no. 2 246–257.
•  M. J. Orr et al., Introduction to radial basis function networks, 1996.
•  J. Park and I. W. Sandberg, Approximation and radial-basis-function networks, Neural computation 5 (1993), no. 2 305–316.
•  C. F. Dunkl and Y. Xu, Orthogonal polynomials of several variables. No. 155. Cambridge University Press, 2014.
•  E. W. Weisstein, Legendre polynomial, MathWorld.