Rolle’s theorem, and therefore Lagrange’s and Taylor’s theorems, are responsible for the inability to determine precisely the error estimate of numerical methods applied to partial differential equations. Basically, this comes from the existence of a non-unique, unknown point which appears in the remainder of Taylor’s expansion, as the heritage of Rolle’s theorem.
This is the reason why, in the context of finite elements, one usually considers only the asymptotic behavior of the error estimates, which strongly depends on the interpolation error (see for example  or ).
Due to this lack of information, several heuristic approaches, essentially based on a probabilistic point of view, were considered to investigate new possibilities; these finally led to the ability to classify numerical methods whose associated data are fixed and not asymptotic (for a review, see ).
However, taking as an unavoidable fact that Taylor’s formula introduces an unknown point, which leads to the inability to know exactly the interpolation error, and consequently the approximation error of a given numerical method, one may ask whether the corresponding errors are bounded by quantities as small as possible.
Here, we focus our attention on the values of the numerical constants which appear in these estimates, in order to minimize them as much as possible.
For example, consider, in the two-dimensional case, the P1-Lagrange interpolation error of a given function defined on a given triangle. Then, one can show  that the numerical constant which naturally appears in the corresponding interpolation error estimate is equal to , as the heritage of the remainder of the first order Taylor formula.
This is the reason why, in this paper, we propose a new first order Taylor-like formula in which we strongly modify the distribution of the numerical weights between the Taylor polynomial and the corresponding remainder.
To this end, we introduce a sequence of equally spaced points and consider a linear combination of the first derivative at these points; we then show that an optimal choice of the coefficients in this linear combination minimizes the corresponding remainder, which becomes smaller than the classical one obtained by the standard Taylor formula.
As a consequence, we show that the Lagrange interpolation error, as well as the quadrature error of the trapezoidal rule, is two times smaller than the usual ones obtained by the standard Taylor formula.
The paper is organized as follows. In Section 2, we present the main result of this paper, namely the new first order Taylor-like formula. In Section 3, we derive the consequences for the approximation error, devoted to interpolation in Subsection 3.1 and to numerical quadrature in Subsection 3.2. Finally, concluding remarks follow in Section 4.
2 The new first order Taylor-like theorem
To begin, let us recall the well-known first order Taylor formula .
Let , , and . Then, there exists  such that , and we have:
In order to derive the main result below, let us introduce the function  defined by:
Then, we remark that  and . Moreover, the remainder in (1) satisfies the following result:
The function in the remainder (1) can be written as follows:
Proof: Indeed, Taylor’s formula with the integral form of the remainder at first order gives:
and with the substitution  in the integral of (4), we obtain:
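The identity of Lemma 2.1 can be checked numerically: the remainder of the classical first order expansion should coincide with the rescaled integral of the derivative increments over the unit interval. A minimal sketch, assuming the standard form of both expressions (the function names and the midpoint quadrature are ours):

```python
import math

def remainder_direct(f, fp, a, b):
    # first order Taylor remainder: f(b) - f(a) - (b - a) * f'(a)
    return f(b) - f(a) - (b - a) * fp(a)

def remainder_integral(fp, a, b, n=100000):
    # (b - a) * integral over [0, 1] of (f'(a + t(b - a)) - f'(a)) dt,
    # approximated by the midpoint rule on n subintervals
    h = 1.0 / n
    total = sum(fp(a + (k + 0.5) * h * (b - a)) - fp(a) for k in range(n))
    return (b - a) * total * h

r1 = remainder_direct(math.exp, math.exp, 0.0, 1.0)
r2 = remainder_integral(math.exp, 0.0, 1.0)
print(r1, r2)  # the two values agree up to the quadrature error
```

For the exponential on [0, 1], both expressions evaluate to e - 2, up to the accuracy of the midpoint rule.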
Now let . We define  by the formula below:
where the sequence of real weights  will be determined such that the corresponding remainder built on  is the smallest possible.
In other words, we will prove the following result:
Let  be a real mapping defined on  which belongs to , such that: .
Then we have the following first order expansion:
Moreover, this result is optimal, since the associated weights ( and  for all ) guarantee that the remainder is minimal.
We also notice in Theorem 2.2 that the expression in parentheses is a Riemann sum; when  tends to infinity, we recover the fundamental theorem of integral calculus, that is to say:
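This Riemann-sum remark can be illustrated directly: a left Riemann sum of the derivative over the interval approaches the increment of the function as the number of points grows. A short sketch (the function names are ours):

```python
import math

def riemann_sum_of_derivative(fp, a, b, n):
    # (b - a)/n times the sum of f' at the points a + k(b - a)/n, k = 0..n-1
    h = (b - a) / n
    return h * sum(fp(a + k * h) for k in range(n))

exact = math.exp(1.0) - math.exp(0.0)  # f(b) - f(a) for f = exp on [0, 1]
for n in (10, 100, 1000):
    approx = riemann_sum_of_derivative(math.exp, 0.0, 1.0, n)
    print(n, abs(approx - exact))  # the gap shrinks roughly like 1/n
```

This is the fundamental theorem of calculus recovered in the limit, consistent with the remark above.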
In order to prove Theorem 2.2, we will need the following lemma:
Let  be any continuous function on , and  a sequence of real numbers. Then, we have the formula below:
Let us set:  and , where .
We will prove by induction on  that  for all .
Indeed, if , we have:
So: .
Let us now suppose that , and let us show that: .
where we set:
Let’s now prove Theorem 2.2.
Proof : We have :
That is to say :
Let’s assume that :
So, (15) becomes:
So, (9) becomes:
which can be written by a simple substitution:
Then, (17) gives:
Moreover, we have:
Therefore, to derive a double inequality on , we split the last integral in (19) as follows:
Then, considering the constant sign of on , and on , we have:
Since we also have the following two results:
where we defined the two polynomials and by:
Keeping in mind that we want to minimize , let us determine the value of  for which the polynomial  is minimal.
To this end, let us remark that  is minimal when .
Then, for this value of , (26) becomes:
and finally, by summing over  from  to , we have:
and the corresponding weights are equal to:
which completes the proof of Theorem 2.2.
As an example, let us write formula (6) for  (that is to say, with three points). In this case, we have:
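The three-point case can be tried numerically. Assuming, for illustration, trapezoidal-type weights 1/4, 1/2, 1/4 for the case n = 2 (a hypothetical choice consistent with the Riemann-sum remark above; the exact optimal weights are those of Theorem 2.2), one can compare the resulting remainder with the classical first order one:

```python
import math

def taylor_classical(f, fp, a, b):
    # classical first order expansion: f(a) + (b - a) f'(a)
    return f(a) + (b - a) * fp(a)

def taylor_like_three_points(f, fp, a, b):
    # hypothetical weights 1/4, 1/2, 1/4 (trapezoidal pattern, n = 2)
    m = 0.5 * (a + b)
    return f(a) + (b - a) * (0.25 * fp(a) + 0.5 * fp(m) + 0.25 * fp(b))

a, b = 0.0, 1.0
err_classical = abs(math.exp(b) - taylor_classical(math.exp, math.exp, a, b))
err_three = abs(math.exp(b) - taylor_like_three_points(math.exp, math.exp, a, b))
print(err_classical, err_three)  # the three-point remainder is far smaller
```

For the exponential on [0, 1], the classical remainder is about 0.72, while the three-point expansion with these weights is off by only a few hundredths.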
3 Application to the approximation error
To appreciate the added value of Theorem 2.2 presented in the previous section, this paragraph is devoted to the resulting differences one can observe in two main applicative contexts of numerical analysis. The first one concerns Lagrange polynomial interpolation, and the second one numerical quadrature. In these two cases, we will evaluate the corresponding approximation error with the help of the standard first order Taylor formula on the one hand, and with the generalized formula (6) derived in Theorem 2.2 on the other hand.
3.1 The interpolation error
where satisfies :
As a first application of formula (31)-(32), we consider the particular case of the P1-Lagrange interpolation (see  or ), which consists in interpolating a given function  on  by a polynomial of degree less than or equal to one.
Then, the corresponding interpolation polynomial is given by:
Our aim is then to rewrite it with the help of the new Taylor-like formula (31), and to compare the result with the classical first order Taylor formula given by (1).
Now, a standard result  regarding the Lagrange interpolation error claims that, for any function  which belongs to , we have:
This result is usually derived by considering the suitable function defined by:
So, given that , and by applying Rolle’s theorem twice, there exists  such that: .
Therefore, after some calculations, one obtains that:
Let  be a function satisfying (37); then we have the following interpolation error estimate:
Then, (33) gives:
and due to (41), we get:
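The classical estimate can be exercised numerically: for the degree-one interpolant matching a function at the two endpoints, the interpolation error is classically bounded by the squared interval length over 8 times the supremum of the second derivative. A sketch under that standard bound (the function names and the sampling grid are ours):

```python
import math

def p1_interpolant(f, a, b):
    # degree-one Lagrange interpolant matching f at a and b
    fa, fb = f(a), f(b)
    return lambda x: (fa * (b - x) + fb * (x - a)) / (b - a)

a, b = 0.0, 1.0
f, sup_fpp = math.sin, 1.0  # |(sin)''| <= 1 on [0, 1]
interp = p1_interpolant(f, a, b)
xs = [a + k * (b - a) / 1000 for k in range(1001)]
worst = max(abs(f(x) - interp(x)) for x in xs)
bound = (b - a) ** 2 / 8 * sup_fpp  # classical (b - a)^2 / 8 * sup|f''| bound
print(worst, bound)  # the sampled error stays below the classical bound
```

For the sine function on [0, 1], the sampled worst-case error is well below the theoretical bound of 1/8, which is consistent with the estimate being an upper bound rather than an equality.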
Let us now derive the corresponding result when one uses the new first order Taylor-like formula (31) in the expression of the interpolation polynomial defined by (33).
This is the purpose of the following lemma:
Let , then we have the following interpolation error estimate:
Proof: We begin by writing  and  with the help of (31):
where  satisfies (32), with obvious changes of notation. Namely, we have:
Then, by substituting  and  into the interpolation polynomial given by (33), we obtain:
Now, let us introduce the corrected interpolation polynomial  defined by:
Then, equation (48) becomes:
which completes the proof of this lemma.
If one considers the corrected interpolation polynomial  defined by (49), the error estimate (44) gains an accuracy two times more precise than the one obtained in (38) by the classical Taylor formula.
Indeed, in order to compare (38) and (44) we notice that (44) leads to:
Now, the price of this upgrade is that  is a polynomial of degree less than or equal to two, which requires the computation of  and . However, the consequent gain clearly appears in the following application devoted to finite elements.
Indeed, due to Céa’s lemma , the approximation error is bounded by the interpolation error. Then, if one wants to guarantee locally that the upper bound of the interpolation error does not exceed a given threshold , and if  denotes the local mesh size defined by  with the classical P1-Lagrange interpolation, and  the corresponding one with the corrected interpolation , we have:
Then, the two maximum admissible sizes differ by a factor of the square root of two, and consequently the latter may be around 41 percent greater than the former. This economy in terms of the total number of mesh elements would be even more significant if one considers the extension of this case to a three-dimensional application.
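The mesh-size gain follows directly from the factor two in the constant: if the classical estimate requires h^2 * M / 8 below the threshold while the corrected one only requires h^2 * M / 16 below it, the admissible sizes differ by the square root of two. A back-of-the-envelope sketch (the threshold and the derivative bound are arbitrary illustrative values of ours):

```python
import math

# sample threshold eps and second-derivative bound M (illustrative values)
eps, M = 1.0e-4, 1.0
h_classical = math.sqrt(8.0 * eps / M)   # largest h with h^2 * M / 8  <= eps
h_corrected = math.sqrt(16.0 * eps / M)  # largest h with h^2 * M / 16 <= eps
print(h_corrected / h_classical)  # ratio is sqrt(2), about 1.414
```

The ratio is independent of the chosen threshold and derivative bound, since both cancel in the quotient.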
We also notice that if we now consider the particular class of functions defined on , -periodic, then , and consequently the interpolation error is equal to , and (44) becomes:
In other words, for this class of periodic functions, due to the new first order Taylor-like formula (31), the interpolation error given by (52) is bounded by a quantity two times smaller than the one obtained in (38) from the classical Taylor formula.
We highlight that, in this case, there is no longer any extra cost to get this more accurate result, since it concerns the standard interpolation error associated with the standard Lagrange polynomial.
3.2 The quadrature error
We now consider, for any integrable function  defined on , the well-known trapezoidal quadrature , whose formula is given by:
The choice of (53) is motivated by the fact that this quadrature formula corresponds to approximating the function  by its Lagrange interpolation polynomial  of degree less than or equal to one, given by (33).
Thus, in the literature on numerical integration (see for example  and ), the following estimate is well known as the trapezoid inequality:
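The classical trapezoid inequality bounds the one-panel quadrature error by the cubed interval length over 12 times the supremum of the second derivative. A quick numerical check of that standard bound (the function names are ours; the reference integral is computed in closed form):

```python
import math

def trapezoid(f, a, b):
    # one-panel trapezoidal rule: (b - a)/2 * (f(a) + f(b))
    return 0.5 * (b - a) * (f(a) + f(b))

a, b = 0.0, 1.0
exact = math.e - 1.0  # closed form of the integral of exp over [0, 1]
err = abs(exact - trapezoid(math.exp, a, b))
bound = (b - a) ** 3 / 12 * math.e  # (b - a)^3 / 12 * sup|f''|, sup = e here
print(err, bound)  # the quadrature error respects the classical bound
```

For the exponential on [0, 1], the actual error is about 0.14, comfortably below the bound of e/12.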