A New First Order Taylor-like Theorem With An Optimized Reduced Remainder

12/28/2021
by Joël Chaskalovic, et al.

This paper is devoted to a new first order Taylor-like formula whose remainder is strongly reduced in comparison with the usual one which appears in the classical Taylor's formula. To derive this new formula, we introduce a linear combination of the first derivatives of the function concerned, computed at n+1 equally spaced points between the two points where the function is to be evaluated. We then show that an optimal choice of the weights in this linear combination minimizes the corresponding remainder. Finally, we analyze the Lagrange P_1-interpolation error estimate, as well as the trapezoidal quadrature error, to assess the gain in accuracy obtained thanks to this new Taylor-like formula.


1 Introduction

Rolle’s theorem, and therefore Lagrange's and Taylor’s theorems, are responsible for the inability to precisely determine the error estimate of numerical methods applied to partial differential equations. Basically, this comes from the existence of a non-unique, unknown point which appears in the remainder of Taylor’s expansion, as a heritage of Rolle’s theorem.


This is the reason why, in the context of finite elements, one usually only considers the asymptotic behavior of the error estimates, which strongly depends on the interpolation error (see for example [5] or [6]).

Due to this lack of information, several heuristic approaches, essentially based on a probabilistic framework, were considered to investigate new possibilities; these finally led to the ability to classify numerical methods whose associated data are fixed and not asymptotic (for a review, see [6]-[10]).
However, accepting as an unavoidable fact that Taylor’s formula introduces an unknown point, which makes it impossible to know the interpolation error exactly, and consequently the approximation error of a given numerical method, one can still ask whether the corresponding errors are bounded by quantities which are as small as possible.
We mean here to focus our attention on the values of the numerical constants which appear in these estimates, in order to minimize them as much as possible.
Consider, for example, the two-dimensional case and the P_1-Lagrange interpolation error of a given function defined on a given triangle.
One can then show [1] that the numerical constant which naturally appears in the corresponding interpolation error estimate is equal to , as inherited from the remainder of the first order Taylor's formula.
This is the reason why, in this paper, we propose a new first order Taylor-like formula in which we strongly modify the distribution of the numerical weights between the Taylor polynomial and the corresponding remainder.
To this end, we introduce a sequence of equally spaced points and consider a linear combination of the first derivative at these points, and we show that an optimal choice of the coefficients in this linear combination minimizes the corresponding remainder, which becomes smaller than the classical one obtained by the standard Taylor's formula.
As a consequence, we show that the Lagrange interpolation error and the quadrature error of the trapezoidal rule are both two times smaller than the usual ones obtained with the standard Taylor's formula.
The paper is organized as follows. In Section 2, we present the main result of this paper, namely the new first order Taylor-like formula. In Section 3, we describe its consequences for the approximation error, devoted to interpolation in Subsection 3.1 and to numerical quadrature in Subsection 3.2. Finally, in Section 4, concluding remarks follow.

2 The new first order Taylor-like theorem

To begin, let us recall the well-known first order Taylor's formula [15].
Let , , and . Then, such that , and we have :

(1)

where :

and:

(2)
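As a reading aid, the following minimal numerical sketch (in Python) assumes the standard statement of the classical result, namely that for a C^2 function on [a, b] the first order remainder is bounded by ((b - a)^2 / 2) * sup|f''|; the paper's exact notation in (1)-(2) may differ:

```python
import numpy as np

# Minimal numerical check of the classical first order Taylor formula, assuming
# the standard statement: for f in C^2([a, b]),
#   f(b) = f(a) + (b - a) f'(a) + remainder,  |remainder| <= (b - a)^2 / 2 * sup|f''|.
f, df, d2f = np.sin, np.cos, lambda x: -np.sin(x)

a, b = 0.3, 0.8
remainder = f(b) - f(a) - (b - a) * df(a)

x = np.linspace(a, b, 1001)
classical_bound = (b - a) ** 2 / 2 * np.max(np.abs(d2f(x)))

print(f"remainder       = {remainder:+.3e}")
print(f"classical bound = {classical_bound:.3e}")
assert abs(remainder) <= classical_bound
```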

In order to derive the main result below, let us introduce the function defined by:

Then, we remark that : and . Moreover, the remainder in (1) satisfies the following result:

Proposition 2.1

The function in the remainder (1) can be written as follows:

(3)

Proof : Indeed, Taylor's formula with the integral form of the remainder at first order gives:

(4)

and with the substitution in the integral of (4), we obtain :

where :

Finally,

 
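As a complement to Proposition 2.1, the sketch below numerically verifies the standard integral form of the first order remainder, f(b) - f(a) - (b - a) f'(a) = (b - a)^2 * int_0^1 (1 - s) f''(a + s(b - a)) ds, which is the identity obtained after the substitution used in the proof; this assumes the usual form of (3) and (4):

```python
import numpy as np
from scipy.integrate import quad

# Check of the integral form of the first order Taylor remainder, assuming the
# standard identity (obtained from the usual integral remainder by t = a + s(b - a)):
#   f(b) - f(a) - (b - a) f'(a) = (b - a)^2 * int_0^1 (1 - s) f''(a + s(b - a)) ds.
f, df, d2f = np.sin, np.cos, lambda x: -np.sin(x)
a, b = 0.3, 0.8

lhs = f(b) - f(a) - (b - a) * df(a)
rhs, _ = quad(lambda s: (1 - s) * d2f(a + s * (b - a)), 0.0, 1.0)
rhs *= (b - a) ** 2

print(f"remainder     = {lhs:+.6e}")
print(f"integral form = {rhs:+.6e}")
assert np.isclose(lhs, rhs)
```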

Let now be . We define by the formula below:

(5)

where the sequence of real weights will be determined so that the corresponding remainder built on it is as small as possible.
In other words, we will prove the following result:

Theorem 2.2

Let be a real mapping defined on which belongs to , such that: .
Then we have the following first order expansion:

(6)

where :

(7)

Moreover, this result is optimal, since the associated weights ( and for all ) guarantee that the remainder is minimal.
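As a purely numerical illustration of Theorem 2.2, the sketch below builds the expansion from n+1 equally spaced derivative values. The weights used here (1/(2n) at the two endpoints and 1/n at the interior points) are an assumption suggested by the Riemann-sum interpretation in Remark 2 below, not a restatement of (6)-(7); the resulting remainder is then compared with the classical first order one:

```python
import numpy as np

# Taylor-like expansion built on n+1 equally spaced derivative evaluations.
# The weights below (1/(2n) at both endpoints, 1/n at interior points) are an
# assumption suggested by the Riemann-sum remark; the optimal weights and the
# bound (7) are those stated in the theorem.
def taylor_like(f_prime, a, b, n):
    x = np.linspace(a, b, n + 1)
    w = np.full(n + 1, 1.0 / n)
    w[0] = w[-1] = 1.0 / (2 * n)
    return (b - a) * np.dot(w, f_prime(x))

f, df = np.sin, np.cos
a, b = 0.3, 0.8

classical_remainder = f(b) - (f(a) + (b - a) * df(a))
for n in (1, 2, 4, 8):
    new_remainder = f(b) - (f(a) + taylor_like(df, a, b, n))
    print(f"n={n}: |new remainder| = {abs(new_remainder):.2e} "
          f"(classical: {abs(classical_remainder):.2e})")
```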

Remark 1

To compare the bound on given by (7) with the one on given by (2), we remark that (7) implies that:

(8)

Consequently, the remainder is smaller than .

Remark 2

We also notice that the sum in parentheses in Theorem 2.2 is a Riemann sum which, as tends to infinity, yields the fundamental theorem of integral calculus. That is to say:

In order to prove Theorem 2.2, we will need the following lemma:

Lemma 2.3

Let be any continuous function on , and a sequence of real numbers. Then, we have the formula below:

(9)

where :

Proof : Let’s set : and , where .
We will prove by induction on , that for all .
Indeed, if , we have :

and :

So : .
Let’s now suppose that , and let’s show that : .
We have:

(10)
(11)
(12)
(13)

Conclusion :

where we set :

(14)

 

Let’s now prove Theorem 2.2.
Proof : We have :

That is to say :

(15)

Let’s assume that :

(16)

So, (15) becomes:

(17)

Let’s now use Lemma 2.3 in (17) by setting in (9):

So, (9) becomes:

which can be written by a simple substitution:

(18)

Then, (17) gives:

(19)

Moreover, we have:

Therefore, to derive a double inequality on , we split the last integral in (19) as follows:

(20)

Then, considering the constant sign of on , and on , we have:

(21)

and,

(22)

Thus, (21) and (22) enable us to get the next two inequalities:

(23)

and,

(24)

Since we also have the two following results:

(25)

where we set: , inequalities (23) and (24) lead to:

(26)

where we defined the two polynomials and by:

(27)

Keeping in mind that we want to minimize , let us determine the value of such that the polynomial is minimum.
To this end, let us remark that is minimum when .
Then, for this value of , (26) becomes:

(28)

and finally, by summing on from to , we have:

(29)

Moreover, due to definition (14) of on the one hand, and because the weights satisfy (16) on the other hand, we also have:

and the corresponding weights are equal to:

which completes the proof of Theorem 2.2.  

As an example, let us write formula (6) when , (that is to say with three points). In this case, we have:

(30)

where :
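Under the same assumption on the weights as in the previous sketch, the three-point case n = 2 (endpoints weighted 1/4 and the midpoint 1/2) can be checked numerically as follows:

```python
import numpy as np

# Three-point instance (n = 2) of the expansion, written out explicitly under the
# same assumption on the weights as above: endpoints weighted 1/4, midpoint 1/2.
f, df = np.sin, np.cos
a, b = 0.3, 0.8
m = 0.5 * (a + b)

approx = f(a) + (b - a) * (df(a) + 2.0 * df(m) + df(b)) / 4.0
print(f"f(b) = {f(b):.6f},  three-point expansion = {approx:.6f}")
```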

3 Application to the approximation error

To highlight the added value of Theorem 2.2 presented in the previous section, this section is devoted to assessing the resulting differences one can observe in two main applicative contexts of numerical analysis. The first one concerns Lagrange polynomial interpolation and the second one numerical quadrature. In these two cases, we will evaluate the corresponding approximation error with the help of the standard first order Taylor's formula on the one hand, and of the generalized formula (6) derived in Theorem 2.2 on the other hand.

3.1 The interpolation error

In this section we consider the first application of the generalized Taylor-like expansion (6) when . In this case, for any function which belongs to , formula (6) can be written :

(31)

where satisfies :

(32)

As a first application of formulas (31)-(32), we will consider the particular case of P_1-Lagrange interpolation (see [11] or [14]), which consists in interpolating a given function on by a polynomial of degree less than or equal to one.
The corresponding interpolation polynomial is then given by:

(33)

One can remark that, with the help of (33), we have: and .
Our purpose now is to investigate the consequences of formula (31) when one uses it to evaluate the error of interpolation defined by

and to compare it with the classical first order Taylor’s formula given by (1).
Now, a standard result [12] regarding the Lagrange interpolation error states that, for any function which belongs to , we have:

(34)

This result is usually derived by considering the suitable function defined by:

(35)

So, given that: , and by applying Rolle's theorem twice, there exists such that: .
Therefore, after some calculations, one obtains that:

(36)

and (34) simply follows.
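As a numerical check, the sketch below verifies the classical estimate, assuming the standard form of (34), namely that the P_1-Lagrange interpolation error on [a, b] is bounded by ((b - a)^2 / 8) * sup|f''|:

```python
import numpy as np

# Check of the classical P1-Lagrange interpolation error bound, assuming the
# standard form of (34):  |f(x) - Pi(x)| <= (b - a)^2 / 8 * sup|f''|  on [a, b].
f, d2f = np.sin, lambda x: -np.sin(x)
a, b = 0.3, 0.8

x = np.linspace(a, b, 2001)
p1 = f(a) + (f(b) - f(a)) * (x - a) / (b - a)   # linear interpolant through (a, f(a)) and (b, f(b))

max_error = np.max(np.abs(f(x) - p1))
bound = (b - a) ** 2 / 8 * np.max(np.abs(d2f(x)))
print(f"max interpolation error = {max_error:.3e},  classical bound = {bound:.3e}")
assert max_error <= bound
```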
However, to appreciate the difference between the classical Taylor’s formula and the new one in (31), let us introduce and such that:

(37)

We will now reformulate estimate (34) by using the classical Taylor's formula (1). This is the purpose of the following lemma:

Lemma 3.1

Let be a function satisfying (37), then we have the following interpolation error estimate:

(38)

where: .

Proof : We begin by writing the Lagrange polynomial given by (33) with the help of the classical first order Taylor's formula (1).
Indeed, substituting and in (33) by , we have:

(39)
(40)

where, with the help of (2) and (37), and satisfy:

(41)

Then, (33) gives:

(42)

and due to (41), we get:

(43)

where we used that: .
Finally, since , (43) leads to (38).  

Let us now derive the corresponding result when one uses the new first order Taylor-like formula (31) in the expression of the interpolation polynomial defined by (33).
This is the purpose of the following lemma:

Lemma 3.2

Let , then we have the following interpolation error estimate:

(44)

Proof : We begin by writing and with the help of (31):

(45)
(46)

where satisfies (32) with the obvious changes of notation. Namely, we have:

(47)

Then, by substituting and in the interpolation polynomial given by (33), we have:

(48)

Now, if we introduce the corrected interpolation polynomial defined by:

(49)

then equation (48) becomes:

(50)

Thus, due to (47), we have: , and (48), with the help of (49), gives:

(51)

which completes the proof of this lemma.  

Let us now formulate a couple of consequences of Lemma 3.1 and Lemma 3.2:

  1. If one considers the corrected interpolation polynomial defined by (49), the error estimate (44) is two times more precise than the one we obtained in (38) with the classical Taylor's formula.
    Indeed, in order to compare (38) and (44) we notice that (44) leads to:

    Now, the price of this upgrade is that is a polynomial of degree less than or equal to two, which requires the computation of and . However, the resulting gain clearly appears in the following application devoted to finite elements.
    Indeed, due to Céa’s lemma [5], the approximation error is bounded by the interpolation error. Then, if one wants to locally guarantee that the upper bound of the interpolation error will not exceed a given threshold , and if denotes the local mesh size defined by with the classical P_1-Lagrange interpolation and the corresponding one with the corrected interpolation , we have:

    Then, the difference of the maximum size between and is , and consequently may be around percent greater than (see the sketch following this list). This economy in terms of the total number of mesh elements would be even more significant if one considers the extension of this case to a three-dimensional application.

  2. We also notice that if we consider now the particular class of functions defined on , -periodic, then , and consequently, the interpolation error is equal to , and (44) becomes:

    (52)

    In other words, for this class of periodic functions, due to the new first order Taylor-like formula (31), the interpolation error given by (52) is bounded by a quantity which is two times smaller than the one we obtained in (38) from the classical Taylor's formula.
    We highlight that in this case there is no longer any extra cost to obtain this more accurate result, since it concerns the standard interpolation error associated with the standard Lagrange polynomial.
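The following sketch makes the mesh-size arithmetic of item 1 explicit, under the assumption that the local interpolation error bound behaves like C * h^2 and that the corrected interpolation divides the constant C by two, as suggested by the comparison between (38) and (44); the admissible mesh size then grows by a factor of sqrt(2):

```python
import math

# Sketch of the mesh-size argument of item 1, under the assumption that the local
# interpolation error bound behaves like C * h**2 and that the corrected
# interpolation divides the constant C by two.
C, threshold = 1.0, 1e-3           # hypothetical constant and error threshold
h_classical = math.sqrt(threshold / C)
h_corrected = math.sqrt(threshold / (C / 2.0))

ratio = h_corrected / h_classical  # equals sqrt(2), independent of C and of the threshold
print(f"h_corrected / h_classical = {ratio:.4f}  (about {100 * (ratio - 1):.0f}% larger)")
```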

3.2 The quadrature error

We consider now, for any integrable function defined on , the well-known trapezoidal quadrature rule [12], whose formula is given by:

(53)

Our interest in (53) is motivated by the fact that this quadrature formula corresponds to approximating the function by its Lagrange interpolation polynomial of degree less than or equal to one, which is given by (33).
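Assuming the standard form of (53), namely T(f) = ((b - a)/2) * (f(a) + f(b)), the sketch below illustrates this correspondence: the trapezoidal rule coincides with the exact integral of the P_1-Lagrange interpolant of f on [a, b]:

```python
import numpy as np
from scipy.integrate import quad

# The trapezoidal rule, assuming the standard form of (53),
#   T(f) = (b - a) / 2 * (f(a) + f(b)),
# coincides with the exact integral of the P1-Lagrange interpolant (33) of f on [a, b].
f = np.sin
a, b = 0.3, 0.8

trapezoid = (b - a) / 2 * (f(a) + f(b))
p1 = lambda x: f(a) + (f(b) - f(a)) * (x - a) / (b - a)
integral_of_p1, _ = quad(p1, a, b)

print(f"T(f) = {trapezoid:.6f},  integral of P1 interpolant = {integral_of_p1:.6f}")
assert np.isclose(trapezoid, integral_of_p1)
```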
Thus, in the literature on numerical integration (see for example [12] and [4]), the following estimate is well known as the trapezoid inequality: