
An iterative approximate method of solving boundary value problems using dual Bernstein polynomials

In this paper, we present a new iterative approximate method of solving boundary value problems. The idea is to compute approximate polynomial solutions in the Bernstein form using least squares approximation combined with some properties of dual Bernstein polynomials, which guarantee high efficiency of our approach. The method can deal with both linear and nonlinear differential equations. Moreover, not only second order but also higher order differential equations can be solved. Illustrative examples confirm the versatility of our method.


1 Introduction

In this paper, we consider boundary value problems of the following form.

Problem 1.1.

[Boundary value problem] Solve the th order differential equation

(1.1)

with the boundary conditions

(1.2)
(1.3)

where , and . The differential equation (1.1) may be nonlinear.

The goal of the paper is to present a new iterative approximate method of solving Problem 1.1. The method is based on least squares approximation, and complies with the requirements given below.

  1. An approximate polynomial solution in the Bernstein form is computed. Therefore, the solution is given on the whole interval , not only at certain points.

  2. The degree of the approximate polynomial solution can be chosen arbitrarily.

  3. The method deals with both linear and nonlinear differential equations (1.1).

  4. Not only second order differential equations (1.1) but also higher order ones can be solved.

  5. High efficiency is achieved thanks to some properties of Bernstein and dual Bernstein polynomials.

As we shall see, the Bernstein form is a very suitable choice for an approximate polynomial solution of Problem 1.1. Here are the most important reasons.

  1. The boundary conditions (1.2) and (1.3) can be easily satisfied by a polynomial in the Bernstein form.

  2. Due to some properties of dual Bernstein polynomials, the cost of an iteration of our algorithm is , where is the degree of the approximate polynomial solution in the Bernstein form derived in that iteration. The iteration is based, among other things, on solving a least squares approximation problem. Note that the standard way of handling the least squares approximation is to solve the system of normal equations, with the complexity ; the dual-basis shortcut that avoids the normal equations is recalled after this list.

  3. Bernstein polynomials can be easily differentiated and integrated.
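For completeness, here is the standard fact behind item 2 above, stated in generic notation (the paper's concrete symbols are introduced in Sections 2 and 3): once a dual basis is available, the least squares coefficients are plain inner products, so the system of normal equations never has to be formed.

Let $\{B_i\}$ be a basis of the approximation space and $\{D_i\}$ its dual basis with respect to the inner product used for the approximation, so that $\langle B_i, D_j\rangle = \delta_{ij}$. If $p^\ast = \sum_i a_i B_i$ is the best least squares approximation of a function $f$, then $\langle f - p^\ast, q\rangle = 0$ for every $q$ in the space; taking $q = D_j$ gives
$$a_j = \langle p^\ast, D_j\rangle = \langle f, D_j\rangle.$$
This is exactly the mechanism that allows the normal equations to be bypassed.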

In recent years, the idea of representing approximate solutions of differential equations in the Bernstein basis has been used in several papers (see, e.g., BB06; BB07; TS17). Least squares approximation and orthogonality have been used for many years, usually in conjunction with Chebyshev polynomials and Picard's iteration (see, e.g., CN63; Fuk97; JYWB13; Wri64). However, to the best of the authors' knowledge, there are no methods based on the idea presented in this paper.

The paper is organized as follows. Section 2 has a preliminary character. In Section 3, we give a description of our method including algorithmic details of the implementation. Several illustrative examples are presented in Section 4. For a brief summary of the paper, see Section 5.

2 Preliminaries

Let denote the space of all polynomials of degree at most . Bernstein polynomials of degree are defined by

(2.1)

and form a basis of . These polynomials are known for their important applications in computer-aided design and approximation theory (see, e.g., Far12). In each iteration, our method computes the coefficients of the Bernstein form of an approximate solution

(2.2)

of Problem 1.1.
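As a point of reference for (2.1) and (2.2), here is a minimal Python sketch assuming the standard definitions B_i^n(x) = C(n, i) x^i (1 - x)^(n - i) and P(x) = sum_i b_i B_i^n(x); the function names are illustrative only.

```python
from math import comb

def bernstein(n, i, x):
    # Assumed form of (2.1): B_i^n(x) = C(n, i) * x**i * (1 - x)**(n - i).
    return comb(n, i) * x**i * (1.0 - x)**(n - i)

def eval_bernstein_form(b, x):
    # Assumed form of (2.2): P(x) = sum_i b[i] * B_i^n(x), with n = len(b) - 1.
    n = len(b) - 1
    return sum(b[i] * bernstein(n, i, x) for i in range(n + 1))

if __name__ == "__main__":
    b = [0.0, 0.5, 0.5, 1.0]               # a cubic given by its Bernstein coefficients
    print(eval_bernstein_form(b, 0.25))
```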

Now, let us deal with the boundary conditions (1.2) and (1.3). This can be done easily thanks to some properties of the Bernstein polynomials (2.1). See Lemmas 2.1 and 2.2.

Lemma 2.1 (e.g., (Far02, §5.3)).

The th derivative of the polynomial (2.2) is given by the formula

where

(2.3)

with the forward difference operator that acts on the first index (second index being fixed),

(2.4)

Moreover,

(2.5)
(2.6)
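The following sketch implements the classical differentiation rule that Lemma 2.1 is understood to state: if P(x) = sum_i b_i B_i^n(x), then P^(k)(x) = n!/(n-k)! sum_{i=0}^{n-k} (Delta^k b_i) B_i^{n-k}(x), with Delta the forward difference acting on the index i. The exact notation of (2.3)-(2.6) may differ; the code is only a hedged illustration.

```python
from math import comb, perm

def forward_diff(b):
    # One application of the forward difference operator Delta to a coefficient list.
    return [b[i + 1] - b[i] for i in range(len(b) - 1)]

def derivative_coeffs(b, k):
    # Bernstein coefficients (of degree n - k) of the k-th derivative of
    # P(x) = sum_i b[i] B_i^n(x):  (n! / (n - k)!) * Delta^k b_i.
    d = list(b)
    for _ in range(k):
        d = forward_diff(d)
    factor = perm(len(b) - 1, k)           # n * (n - 1) * ... * (n - k + 1)
    return [factor * c for c in d]

def eval_bernstein_form(b, x):
    n = len(b) - 1
    return sum(comb(n, i) * x**i * (1.0 - x)**(n - i) * b[i] for i in range(n + 1))

if __name__ == "__main__":
    b = [1.0, 2.0, 0.0, 3.0]               # a cubic in Bernstein form
    d1 = derivative_coeffs(b, 1)           # Bernstein coefficients of P'
    x, h = 0.3, 1e-6
    central = (eval_bernstein_form(b, x + h) - eval_bernstein_form(b, x - h)) / (2.0 * h)
    print(eval_bernstein_form(d1, x), central)   # should agree closely
```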
Lemma 2.2.

The polynomial (2.2), which has the outer coefficients

(2.7)
(2.8)

and arbitrary inner coefficients , satisfies the boundary conditions

(2.9)
(2.10)

(cf. (1.2) and (1.3)).

Proof.

We apply the formulas (2.5) and (2.6) to (2.9) and (2.10), respectively. After some algebra, we derive (2.7) and (2.8). ∎

Remark 2.1.

As is known, a boundary value problem with nonhomogeneous boundary conditions (1.2) and (1.3) (see Problem 1.1) can always be converted to one with homogeneous boundary conditions, i.e., the conditions with and (cf. (2.9) and (2.10)). To satisfy these homogeneous boundary conditions, no computations are required, since the outer coefficients of a polynomial solution in the Bernstein form are equal to (cf. (2.7) and (2.8)). This is one of the reasons why the Bernstein basis is so well suited to boundary value problems.

According to Lemma 2.2, the outer coefficients and depend only on the boundary conditions (1.2) and (1.3). As a result, these coefficients can be easily computed at the beginning of each iteration of the algorithm. What remains is to compute efficiently the optimal inner coefficients . As we shall see, dual Bernstein polynomials are of great importance in this approximation process.

Let us define the inner product by

(2.11)

The dual Bernstein polynomial basis of degree (see Cie87)

(2.12)

satisfies the relation

where equals if , and otherwise. According to the next lemma, dual Bernstein polynomials (2.12) can be efficiently represented in the Bernstein basis.

Lemma 2.3 (LW11).

Dual Bernstein polynomials (2.12) have the Bernstein representation

where the coefficients satisfy the recurrence relation

with

We adopt the convention that if , or , or . The starting values are

Notice that Lemma 2.3 yields an efficient algorithm for computing the connection coefficients with the complexity (cf. (LW11, Algorithm 3.3 for )).
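For cross-checking the recurrence of Lemma 2.3, the connection coefficients can also be obtained by brute force, assuming that the inner product (2.11) is the unweighted L2 product on [0, 1] (if a weighted product is used, the Gram matrix below has to be adapted). The sketch inverts the closed-form Bernstein Gram matrix, which costs O(n^3) and is meant only as a check, not as a replacement for the O(n^2) algorithm.

```python
import numpy as np
from math import comb

def bernstein_gram(n):
    # <B_i^n, B_j^n> = C(n, i) C(n, j) / ((2n + 1) C(2n, i + j)) for the
    # unweighted L2 inner product on [0, 1] (assumed form of (2.11)).
    G = np.empty((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(n + 1):
            G[i, j] = comb(n, i) * comb(n, j) / ((2 * n + 1) * comb(2 * n, i + j))
    return G

def dual_bernstein_coeffs(n):
    # Row k holds the coefficients c_{k, j} with D_k^n(x) = sum_j c_{k, j} B_j^n(x).
    return np.linalg.inv(bernstein_gram(n))

if __name__ == "__main__":
    n = 5
    G, C = bernstein_gram(n), dual_bernstein_coeffs(n)
    # Duality check: <B_i^n, D_k^n> = (C @ G)[k, i] should give the identity matrix.
    print(np.allclose(C @ G, np.eye(n + 1)))
```

Under this assumption, the rows of C coincide with the Bernstein coefficients of the dual polynomials computed by the recurrence of Lemma 2.3.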

Recently, dual Bernstein polynomials have been extensively studied because of their applications in numerical analysis and computer-aided design (see, e.g., GLW15; GLW17; LW11; WGL15; WL09). In this paper, we present another application of these dual bases.

3 An iterative approximate method of solving boundary value problems using dual Bernstein polynomials

In this section, we first present a sketch of the idea for our iterative approximate method of solving Problem 1.1. Then, we give algorithmic details of the implementation.

Let us assume that we are looking for the approximate polynomial solution of degree ,

(3.1)

i.e., the goal is to compute the coefficients . Recall that is the order of the differential equation (1.1), and .

We begin with the polynomial of degree ,

where the coefficients are given by the formulas (2.7) and (2.8) for . Therefore, depends only on the boundary conditions (1.2) and (1.3), and the least squares approximation is not involved here.

Next, for , the goal is to compute the approximate solution assuming that the approximate solution was computed in the previous iteration of the algorithm. The approximate solution

(3.2)

must satisfy

  • the optimality condition

    (3.3)

    where is the norm (cf. (2.11)),

  • the boundary conditions

    (3.4)
    (3.5)

    (cf. (1.2) and (1.3)).

According to Lemma 2.2, the outer coefficients and are given by (2.7) and (2.8), respectively. With the boundary conditions (3.4) and (3.5) thus satisfied, it remains to compute the inner coefficients so that the optimality condition (3.3) holds. Theorem 3.1 shows how to deal with this problem efficiently.

Theorem 3.1.

The inner coefficients of the approximate solution (see (3.2)) satisfy the Toeplitz system of linear equations

(3.6)

where , , ,

(3.7)

with as defined in Lemma 2.3, and

(3.8)
Proof.

First, using Lemma 2.1, we obtain

where

(3.9)

Next, must satisfy the optimality condition (3.3). Remembering that and are dual bases of the space , and using the Bernstein representation of (see Lemma 2.3), we derive the formulas for the optimal values of the coefficients ,

(3.10)

Now, we equate (3.9) to (3.10), and obtain the system (3.6). Finally, since if or , it can be easily checked that if or (see (3.7)). Therefore, (3.6) is a Toeplitz system of linear equations. ∎

Remark 3.1.

Since (3.6) is a Toeplitz system of linear equations, it can be solved, in general, with the complexity using the generalized Levinson algorithm (see, e.g., (PTVF07, §2.8)). There are also algorithms that solve Toeplitz systems of linear equations with the complexity (see, e.g., Bun85; Hoog87 and the lists of references given there), but in the context of our problem their significance is only theoretical. Moreover, since is associated with the forward difference operator (see (2.4) and (3.7)), in some cases it can be inverted using explicit formulas (see, e.g., All73; HP72).

Now, let us assume that , which is the most common case for our problem. The bandwidth of is thus very small. Consequently, the best option in practice is to solve the system (3.6) using Gaussian elimination for band matrices with the complexity (see, e.g., (GL96, §4.3)); a short sketch of both solver routes is given after the list of special cases below. Furthermore, let us list some special cases that can be treated separately in order to speed up the computations.

  1. If , then is an upper triangular matrix, and the system (3.6) can be solved using back substitution with the complexity (see, e.g., (GL96, §4.3)).

  2. If , then is a lower triangular matrix, and the system (3.6) can be solved using forward substitution with the complexity (see, e.g., (GL96, §4.3)).

  3. If , then is a tridiagonal matrix, and the system (3.6) can be solved with the complexity (see, e.g., (PTVF07, §2.4)).
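Both solver routes mentioned above can be sketched with standard library calls. The matrix below is an arbitrary diagonally dominant tridiagonal Toeplitz stand-in, not the actual matrix of (3.6), whose entries (3.7) are specific to the method.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, solve_banded, toeplitz

n = 8
c = np.zeros(n); c[0], c[1] = 4.0, 1.0        # first column: diagonal and subdiagonal
r = np.zeros(n); r[0], r[1] = 4.0, 0.5        # first row: diagonal and superdiagonal
rhs = np.ones(n)

# Route 1 (cf. Remark 3.1): Levinson-type O(n^2) Toeplitz solver.
x1 = solve_toeplitz((c, r), rhs)

# Route 2: Gaussian elimination for band matrices, using LAPACK's
# diagonal-ordered storage with bandwidths (1, 1).
ab = np.zeros((3, n))
ab[0, 1:] = 0.5                               # superdiagonal
ab[1, :] = 4.0                                # main diagonal
ab[2, :-1] = 1.0                              # subdiagonal
x2 = solve_banded((1, 1), ab, rhs)

A = toeplitz(c, r)
print(np.allclose(A @ x1, rhs), np.allclose(A @ x2, rhs))   # both should print True
```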

Remark 3.2.

Notice that the computation of (3.8) requires a method of computing the collection of integrals

where . Since we do not assume anything about , it is impossible to recommend one specific method that works for every example. However, Gaussian quadratures (see, e.g., (PTVF07, §4.6)) or well-developed algorithms provided by some computing environments may be good choices for many examples. We used the Maple13 function int with the option numeric, and the results are satisfactory (see Section 4).
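If a Gaussian quadrature is preferred over a built-in integrator, a minimal option is a Gauss-Legendre rule mapped from [-1, 1] to [0, 1], as sketched below. The integrand is a placeholder; the actual integrands of (3.8) are specific to the method.

```python
import numpy as np

def gauss_legendre_01(func, k=20):
    # Integrate func over [0, 1] with a k-point Gauss-Legendre rule,
    # mapping the standard nodes and weights from [-1, 1] to [0, 1].
    nodes, weights = np.polynomial.legendre.leggauss(k)
    x = 0.5 * (nodes + 1.0)
    w = 0.5 * weights
    return np.sum(w * func(x))

if __name__ == "__main__":
    # Placeholder integrand; the exact value of the integral is 1 - cos(1).
    print(gauss_legendre_01(np.sin), 1.0 - np.cos(1.0))
```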

Remark 3.3.

Observe that in (3.8) requires . According to Lemma 2.1, these polynomials can be represented in the Bernstein form. The coefficients of these representations can be computed using (2.3) with (2.4). However, this approach is inefficient; its complexity is . Note that it is much more efficient to put in a table, and to complete this table using the well-known recurrence relation for the forward difference operator,

where . The complexity of this approach is . Moreover, the application of a quadrature to (3.8) (see Remark 3.2) requires a method of evaluating a polynomial in the Bernstein form. One can use a Horner-like scheme (see, e.g., (Gol03, §5.4.2)), which has the complexity , or de Casteljau's algorithm (see, e.g., (Gol03, §5.1)), which has the complexity .
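Both ingredients of this remark admit short sketches: completing the forward-difference table row by row (each row obtained from the previous one), and evaluating a polynomial in Bernstein form with de Casteljau's algorithm; the Horner-like scheme is analogous. Names and array layout are illustrative.

```python
def forward_difference_table(b, kmax):
    # Row k holds Delta^k b_0, Delta^k b_1, ...; each row is obtained from the
    # previous one via Delta^k b_i = Delta^(k-1) b_{i+1} - Delta^(k-1) b_i.
    table = [list(b)]
    for _ in range(kmax):
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def de_casteljau(b, x):
    # Evaluate P(x) = sum_i b[i] B_i^n(x) by repeated convex combinations (O(n^2)).
    c = list(b)
    while len(c) > 1:
        c = [(1.0 - x) * c[i] + x * c[i + 1] for i in range(len(c) - 1)]
    return c[0]

if __name__ == "__main__":
    b = [1.0, 2.0, 0.0, 3.0]
    print(forward_difference_table(b, 2))
    print(de_casteljau(b, 0.5))
```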

Now, we summarise the whole idea in Algorithm 3.1.

Algorithm 3.1.

Input: , , , , ,
Output: the coefficients of (see (3.1))

Step I.

Compute the coefficients of by (2.7) and (2.8) for , where .

Step II.

For ,

  1. compute the outer coefficients and of by (2.7) and (2.8), respectively;

  2. compute using Lemma 2.3;

  3. compute the coefficients of using Remark 3.3;

  4. compute by (3.8) (see Remarks 3.2 and 3.3);

  5. compute the inner coefficients of by solving the Toeplitz system of linear equations (3.6) (see Remark 3.1).

Step III.

Return the coefficients of .
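To make the overall flow more concrete, the sketch below is a deliberately naive, self-contained stand-in for the structure of Algorithm 3.1 on a toy linear problem y''(x) = g(x), y(0) = alpha, y(1) = beta: the outer Bernstein coefficients are fixed by the boundary values (since P(0) = b_0 and P(1) = b_n), and the inner coefficients are fitted by a brute-force discrete least squares on the residual of the equation. The paper's actual optimality condition (3.3), its treatment of nonlinear right-hand sides, and its O(n^2) dual-Bernstein linear algebra are not reproduced; all names are illustrative.

```python
import numpy as np
from math import comb

def bern(n, i, x):
    # B_i^n(x), with out-of-range indices treated as the zero function.
    if i < 0 or i > n:
        return np.zeros_like(x)
    return comb(n, i) * x**i * (1.0 - x)**(n - i)

def bern_dd(n, i, x):
    # Second derivative of B_i^n via the standard degree-lowering formula.
    return n * (n - 1) * (bern(n - 2, i - 2, x)
                          - 2.0 * bern(n - 2, i - 1, x)
                          + bern(n - 2, i, x))

def solve_toy_bvp(g, alpha, beta, n, samples=400):
    # Fit P(x) = sum_i b[i] B_i^n(x) to y'' = g(x), y(0) = alpha, y(1) = beta.
    xs = np.linspace(0.0, 1.0, samples)
    b = np.zeros(n + 1)
    b[0], b[n] = alpha, beta          # outer coefficients: P(0) = b_0, P(1) = b_n
    # The residual P''(x) - g(x) is linear in the inner coefficients b_1, ..., b_{n-1}.
    A = np.column_stack([bern_dd(n, j, xs) for j in range(1, n)])
    rhs = g(xs) - alpha * bern_dd(n, 0, xs) - beta * bern_dd(n, n, xs)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    b[1:n] = sol
    return b

if __name__ == "__main__":
    N = 10
    g = lambda x: -np.pi**2 * np.sin(np.pi * x)   # toy problem; exact solution sin(pi*x)
    b = solve_toy_bvp(g, 0.0, 0.0, N)
    xs = np.linspace(0.0, 1.0, 101)
    P = sum(b[i] * bern(N, i, xs) for i in range(N + 1))
    print(np.max(np.abs(P - np.sin(np.pi * xs))))  # maximum deviation from the exact solution
```

The printed value serves only as a sanity check of the sketch, not as a reproduction of the paper's results.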

4 Examples

Results of the experiments were obtained in Maple13 using -digit arithmetic. Integrals (3.8) were computed using the Maple13 function int with the option numeric. Throughout this section, we use the following notation:

(4.1)

is the error function, and

(4.2)

is the maximum error, where with .
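As a small illustration of (4.2), the maximum error can be estimated by sampling the error function (4.1) on a grid; the uniform grid and the pair of functions below are placeholders only.

```python
import numpy as np

def max_error(exact, approx, samples=1000):
    # Estimate max |y(x) - P_n(x)| by sampling on a uniform grid in [0, 1].
    xs = np.linspace(0.0, 1.0, samples + 1)
    return np.max(np.abs(exact(xs) - approx(xs)))

if __name__ == "__main__":
    # Placeholder pair: an exact solution and a crude polynomial stand-in.
    print(max_error(lambda x: np.sin(np.pi * x), lambda x: 4.0 * x * (1.0 - x)))
```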

Example 4.1.

First, let us consider the following second order nonlinear differential equation:

with the boundary conditions

According to (GGL85, Chapter II, §4),

is the exact solution. We have computed the approximate solutions . Fig. 1 illustrates the error functions , and . For the list of maximum errors (4.2), see Table 1.

Figure 1: Plots of three selected error functions (4.1) for Example 4.1.
Example 4.2.

Now, let us solve the following fourth order linear differential equation:

with the boundary conditions

One can check that the exact solution is

(cf. (ZC08, §4.3)). The maximum errors for our approximate solutions are shown in Table 1. Plots of the selected error functions (4.1) can be seen in Fig. 2.

Figure 2: Plots of three selected error functions (4.1) for Example 4.2.
Example 4.3.

The following fourth order nonlinear differential equation:

with the conditions

has the exact solution

(cf. (PV95, §4.2.1.18)). Let us consider the approximate solutions . Selected error functions (4.1) are shown in Fig. 3. The list of maximum errors (4.2) is given in Table 1.

Figure 3: Plots of three selected error functions (4.1) for Example 4.3.
Example 4.4.

Next, we apply the algorithm to the following third order linear differential equation:

with the conditions

The exact solution is

where

with

(cf. (AS72, §10.4.57)). Here and are Airy functions of the first and second kind, respectively (for details, see, e.g., (AS72, §10.4)), and is the gamma function (see, e.g., (AS72, §6.1)). Values of the Airy functions were computed using the AiryAi and AiryBi procedures provided by Maple13. Let us consider the approximate solutions . For the plots of the error functions , and , see Fig. 4. Maximum errors (4.2) are listed in Table 1.

Figure 4: Plots of three selected error functions (4.1) for Example 4.4.
Example 4.5.

Finally, let us consider the following second order linear differential equation:

subject to the boundary conditions

with the exact solution

(cf. (GR07, §8.49)). Here and are Bessel functions of order of the first and second kind, respectively (for details, see, e.g., (PV95, §2.1.2.121)). Values of the Bessel functions were computed using the BesselJ and BesselY procedures provided by Maple13. Maximum errors (4.2) and plots of the selected error functions (4.1) for our approximate solutions can be found in Table 1 and Fig. 5, respectively.

Figure 5: Plots of three selected error functions (4.1) for Example 4.5.
Table 1: Maximum errors (4.2) of the approximate solutions of the problems from Examples 4.1–4.5.

5 Conclusions and future work

We presented a new iterative approximate method of solving boundary value problems. In each iteration, the method efficiently computes an approximate polynomial solution in the Bernstein form using least squares approximation combined with some properties of dual Bernstein polynomials. Both linear and nonlinear differential equations of any order can be solved. The examples confirm that the method is versatile; some of them show that it can handle differential equations whose exact solutions are quite complicated and are written in terms of special functions such as Airy and Bessel functions.

In recent years, there has been progress in the area of dual bases in general (see, e.g., Ker13; Woz13; Woz14). Notice that our idea could be generalized to work for any pair of dual bases, not only the Bernstein and dual Bernstein bases. However, the algorithm would be much more complicated. For example, dealing with the boundary conditions would be much more difficult, since we would not be able to use Lemma 2.2. In the near future, we intend to study this generalization.

References

  • [1] M. Abramowitz, I. A. Stegun (eds.), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, tenth printing, National Bureau of Standards, 1972.
  • [2] E. L. Allgower, Exact inverses of certain band matrices, Numerische Mathematik 21 (1973), 279–284.
  • [3] D. D. Bhatta, M. I. Bhatti, Numerical solution of KdV equation using modified Bernstein polynomials, Applied Mathematics and Computation 174 (2006), 1255–1268.
  • [4] M. I. Bhatti, P. Bracken, Solutions of differential equations in a Bernstein polynomial basis, Journal of Computational and Applied Mathematics 205 (2007), 272–280.
  • [5] J. R. Bunch, Stability of Methods for Solving Toeplitz Systems of Equations, SIAM Journal on Scientific and Statistical Computing 6 (1985), 349–364.
  • [6] Z. Ciesielski, The basis of B-splines in the space of algebraic polynomials, Ukrainian Mathematical Journal 38 (1987), 311–315.
  • [7] C. W. Clenshaw, H. J. Norton, The solution of nonlinear ordinary differential equations in Chebyshev series, The Computer Journal 6 (1963), 88–92.
  • [8] F. de Hoog, A new algorithm for solving Toeplitz systems of equations, Linear Algebra and its Applications 88–89 (1987), 123–138.
  • [9] G. E. Farin, Curves and Surfaces for Computer-Aided Geometric Design. A Practical Guide, fifth edition, Academic Press, 2002.
  • [10] R. T. Farouki, The Bernstein polynomial basis: A centennial retrospective, Computer Aided Geometric Design 29 (2012), 379–419.
  • [11] T. Fukushima, Picard iteration method, Chebyshev polynomial approximation, and global numerical integration of dynamical motions, The Astronomical Journal 113 (1997), 1909–1914.
  • [12] R. Goldman, Pyramid Algorithms: A Dynamic Programming Approach to Curves and Surfaces for Geometric Modeling, Elsevier, 2003.
  • [13] G. H. Golub, C. F. Van Loan, Matrix Computations, third edition, The Johns Hopkins University Press, 1996.
  • [14] P. Gospodarczyk, S. Lewanowicz, P. Woźny, -constrained multi-degree reduction of Bézier curves, Numerical Algorithms 71 (2016), 121–137.
  • [15] P. Gospodarczyk, S. Lewanowicz, P. Woźny, Degree reduction of composite Bézier curves, Applied Mathematics and Computation 293 (2017), 40–48.
  • [16] I. S. Gradshteyn, I. M. Ryzhik, Table of Integrals, Series, and Products, seventh edition, Academic Press, 2007.
  • [17] A. Granas, R. Guenther, J. Lee, Nonlinear boundary value problems for ordinary differential equations. Dissertationes Mathematicae CCXLIV, PWN, 1985.
  • [18] W. D. Hoskins, P. J. Ponzo, Some properties of a class of band matrices, Mathematics of Computation 26 (1972), 393–400.
  • [19] J. L. Junkins, A. B. Younes, R. M. Woollands, X. Bai, Picard Iteration, Chebyshev Polynomials and Chebyshev-Picard Methods: Application in Astrodynamics, The Journal of the Astronautical Sciences 60 (2013), 623–653.
  • [20] S. N. Kersey, Dual basis functions in subspaces of inner product spaces, Applied Mathematics and Computation 219 (2013), 10012–10024.
  • [21] S. Lewanowicz, P. Woźny, Bézier representation of the constrained dual Bernstein polynomials, Applied Mathematics and Computation 218 (2011), 4580–4586.
  • [22] A. D. Polyanin, V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations, CRC Press, 1995.
  • [23] W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, Numerical Recipes: The Art of Scientific Computing, third edition, Cambridge University Press, 2007.
  • [24] H. R. Tabrizidooz, K. Shabanpanah, Bernstein polynomial basis for numerical solution of boundary value problems, Numerical Algorithms (2017), https://doi.org/10.1007/s11075-017-0311-3.
  • [25] P. Woźny, Construction of dual bases, Journal of Computational and Applied Mathematics 245 (2013), 75–85.
  • [26] P. Woźny, Construction of dual B-spline functions, Journal of Computational and Applied Mathematics 260 (2014), 301–311.
  • [27] P. Woźny, P. Gospodarczyk, S. Lewanowicz, Efficient merging of multiple segments of Bézier curves, Applied Mathematics and Computation 268 (2015), 354–363.
  • [28] P. Woźny, S. Lewanowicz, Multi-degree reduction of Bézier curves with constraints, using dual Bernstein basis polynomials, Computer Aided Geometric Design 26 (2009), 566–579.
  • [29] K. Wright, Chebyshev Collocation Methods for Ordinary Differential Equations, The Computer Journal 6 (1964), 358–365.
  • [30] D. G. Zill, M. R. Cullen, Differential Equations with Boundary-Value Problems, seventh edition, Brooks/Cole, 2008.