On the optimal rates of convergence of Gegenbauer projections

In this paper we present a comprehensive convergence rate analysis of Gegenbauer projections. We show that, for analytic functions, the convergence rate of the Gegenbauer projection of degree n is the same as that of the best approximation of the same degree when λ≤0, and is slower than the latter by a factor of n^λ when λ>0, where λ is the parameter in Gegenbauer polynomials. For piecewise analytic functions, we demonstrate that the convergence rate of the Gegenbauer projection of degree n is the same as that of the best approximation of the same degree when λ≤1, and is slower than the latter by a factor of n^{λ-1} when λ>1. The extension to functions of fractional smoothness is also discussed. Our theoretical findings are illustrated by numerical experiments.

1 Introduction

Orthogonal polynomial approximations, such as Gegenbauer, Legendre and Chebyshev approximations, play an important role in many branches of numerical analysis, including function approximation and quadrature [6, 27], the resolution of the Gibbs phenomenon [8, 9] and spectral methods for the numerical solution of differential equations [11, 12, 13, 20, 24]. One of their most attractive features is that their convergence rate depends strongly on the regularity of the underlying function, so that they give highly accurate approximations for smooth functions. Due to the important role that orthogonal polynomial approximations play in many fields of application, their convergence analysis has attracted considerable interest, especially in the spectral methods community.

Let $\mu$ be a positive Borel measure on the interval $[-1,1]$ for which all moments of $\mu$ are finite. We introduce the inner product $(f,g) = \int_{-1}^{1} f(x) g(x)\,\mathrm{d}\mu(x)$ and let $\{\phi_k\}_{k\ge0}$ be a set of orthogonal polynomials with respect to this inner product. Then any $f$ in the associated weighted $L^2$ space can be expanded in terms of the $\phi_k$ as

f(x) = \sum_{k=0}^{\infty} a_k \phi_k(x), \qquad a_k = \frac{(f,\phi_k)}{(\phi_k,\phi_k)}.    (1.1)

Let $f_n$ denote the truncation of the above series after $n+1$ terms, i.e., $f_n(x) = \sum_{k=0}^{n} a_k \phi_k(x)$; it is well known that $f_n$ is the orthogonal projection of $f$ onto the space of polynomials of degree at most $n$. Existing approaches for error estimates of $f_n$ in the maximum norm can be roughly categorized into two types: (i) applying the classical inequality $\|f - f_n\|_{\infty} \le (1+\Lambda_n)E_n(f)$, where $\Lambda_n$ is the Lebesgue constant of the projection and $E_n(f) = \|f - p_n^*\|_{\infty}$ is the error of the best approximation $p_n^*$ of degree $n$ to $f$. Hence, this approach transforms the error estimate of $f_n$ into the problem of finding the asymptotic behavior of the corresponding Lebesgue constant; (ii) using the inequality $\|f - f_n\|_{\infty} \le \sum_{k=n+1}^{\infty} |a_k| \|\phi_k\|_{\infty}$, after which the remaining task is to find sharp estimates of the coefficients $a_k$. The former approach plays a key role in the convergence analysis of polynomial projections, and the asymptotic behavior of the Lebesgue constants associated with classical orthogonal projections is nowadays well understood (see, e.g., [5, 14, 25]). However, it is difficult to establish computable error bounds for $f_n$ with this approach. Moreover, to the best of the author's knowledge, the sharpness of the derived error estimates has not been addressed. For the latter approach, a remarkable advantage is that computable error bounds for $f_n$ can be established (see, e.g., [1, 16, 17, 27, 28, 30, 31, 32, 33, 35]). However, as shown in [30, 31], the convergence rate predicted by this approach may be overestimated for differentiable functions.
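The coefficient-based bound (ii) is easy to experiment with numerically. The following Python sketch, included purely for illustration, compares the measured maximum error of a Legendre projection with the tail bound $\sum_{k>n}|a_k|\,\|\phi_k\|_{\infty}$ (here $\|P_k\|_{\infty}=1$); the test function $f(x)=e^x$ and the degree $n$ are our own assumptions, not taken from the paper.

```python
import numpy as np
from numpy.polynomial import legendre as L

f = np.exp                               # assumed test function
n = 10                                   # truncation degree (assumption)
K = 80                                   # number of coefficients computed

# Legendre coefficients a_k via Gauss-Legendre quadrature.
xq, wq = L.leggauss(K)
V = L.legvander(xq, K - 1)               # columns are P_0(xq), ..., P_{K-1}(xq)
k = np.arange(K)
norms = 2.0 / (2.0 * k + 1.0)            # int_{-1}^{1} P_k(x)^2 dx
a = (V.T * wq) @ f(xq) / norms

# Approach (ii): tail bound sum_{k>n} |a_k| * max|P_k|, with max|P_k| = 1.
tail_bound = np.abs(a[n + 1:]).sum()

# Measured projection error on a fine grid.
xx = np.linspace(-1.0, 1.0, 2001)
err = np.max(np.abs(f(xx) - L.legval(xx, a[:n + 1])))
print(f"max error = {err:.3e}, tail bound = {tail_bound:.3e}")
```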

In this work, we investigate optimal rates of convergence of Gegenbauer projections in the maximum norm $\|u\|_{\infty} = \max_{x\in[-1,1]}|u(x)|$, where the underlying weight function is $\omega_{\lambda}(x) = (1-x^2)^{\lambda-1/2}$ with $\lambda>-1/2$. In order to exhibit the dependence on the parameter $\lambda$, we denote by $f_n^{\lambda}$ the Gegenbauer projection of degree $n$. From the preceding discussion we have

\|f - f_n^{\lambda}\|_{\infty} \le \left(1 + \Lambda_n^{\lambda}\right) E_n(f),    (1.2)

where $\Lambda_n^{\lambda}$ is the Lebesgue constant of the Gegenbauer projection. For $\lambda>-1/2$ it is known (see, e.g., [5, 14, 15]) that

\Lambda_n^{\lambda} = \begin{cases} O(n^{\lambda}), & \lambda>0, \\ O(\log n), & \lambda=0, \\ O(1), & -1/2<\lambda<0. \end{cases}    (1.3)

Regarding (1.2) and (1.3), it is natural to ask: when using (1.2) to predict the convergence rate of $f_n^{\lambda}$, how sharp is the result? If the predicted rate is not sharp, what is the optimal rate? It is easily seen that the predicted rate of convergence of $f_n^{\lambda}$ is optimal when $-1/2<\lambda<0$, since it is the same as that of $E_n(f)$, and is near-optimal when $\lambda=0$, since $\log n$ grows very slowly. When $\lambda>0$, we can deduce that the rate of convergence of $f_n^{\lambda}$ is slower than that of $E_n(f)$ by at most a factor of $n^{\lambda}$. More recently, the particular case $\lambda=1/2$, which corresponds to Legendre projections, was examined in [31]. It was shown that the rate of convergence predicted by (1.2) is sharp when the underlying function is analytic, but is optimistic for piecewise analytic functions. For the latter, it has been shown that the convergence rate of $f_n^{1/2}$ is actually the same as that of $E_n(f)$. This finding provides new insight into the approximation power of Legendre projections and inspires us to explore the case of Gegenbauer projections.
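To get a feel for (1.3), the Lebesgue constant can be estimated numerically via $\Lambda_n^{\lambda} = \max_{x\in[-1,1]} \int_{-1}^{1} |K_n^{\lambda}(x,y)|\,\omega_{\lambda}(y)\,\mathrm{d}y$, with $K_n^{\lambda}$ the kernel introduced in Section 5. The rough Python sketch below is our own illustration, not the paper's code; the grid sizes, the value of $\lambda$ and the degrees are assumptions. For $\lambda=1$ the printed ratios $\Lambda_n^{\lambda}/n^{\lambda}$ should settle near a constant.

```python
import numpy as np
from scipy.special import roots_gegenbauer, eval_gegenbauer, gammaln

def lebesgue_constant(n, lam, nquad=300, ngrid=201):
    # Gauss-Gegenbauer rule for the weight (1-y^2)^(lam-1/2); needs lam > -1/2, lam != 0.
    yq, wq = roots_gegenbauer(nquad, lam)
    k = np.arange(n + 1)
    # log of h_k = pi*2^(1-2*lam)*Gamma(k+2*lam) / (Gamma(k+1)*(k+lam)*Gamma(lam)^2)
    log_h = (np.log(np.pi) + (1 - 2 * lam) * np.log(2.0) + gammaln(k + 2 * lam)
             - gammaln(k + 1) - np.log(k + lam) - 2 * gammaln(lam))
    Cy = np.array([eval_gegenbauer(kk, lam, yq) for kk in k])   # (n+1, nquad)
    xx = np.linspace(-1.0, 1.0, ngrid)
    Cx = np.array([eval_gegenbauer(kk, lam, xx) for kk in k])   # (n+1, ngrid)
    Kxy = (Cx / np.exp(log_h)[:, None]).T @ Cy                  # kernel values K_n(x, y)
    return np.max(np.abs(Kxy) @ wq)

lam = 1.0
for n in [8, 16, 32, 64]:
    print(n, lebesgue_constant(n, lam) / n ** lam)              # roughly constant
```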

The aim of this paper is to present a comprehensive convergence rate analysis of Gegenbauer projections and to clarify the role of the parameter $\lambda$. Our main contributions can be summarized as follows:

  • For analytic functions, we show that the inequality (1.2) is sharp in the sense that the optimal rate of convergence of $f_n^{\lambda}$ is slower than that of $E_n(f)$ by a factor of $n^{\lambda}$ when $\lambda>0$. When $-1/2<\lambda\le0$, the rate of convergence of $f_n^{\lambda}$ is the same as that of $E_n(f)$;

  • For piecewise analytic functions, we show that the optimal rate of convergence of $f_n^{\lambda}$ is the same as that of $E_n(f)$ when $-1/2<\lambda\le1$. When $\lambda>1$, however, we prove that the optimal rate of convergence of $f_n^{\lambda}$ is slower than that of $E_n(f)$ by a factor of $n^{\lambda-1}$. Comparing this finding with the results predicted by (1.2) and (1.3), we see that the convergence rate of the Gegenbauer projection is better than the predicted result by a factor of $n^{\lambda}$ when $0<\lambda\le1$ and by a factor of $n$ when $\lambda>1$;

  • We extend our discussion to functions of fractional smoothness, including functions with endpoint singularities and functions with an interior singularity of fractional order. The optimal rate of convergence of $f_n^{\lambda}$ for some model functions is also analyzed.

The remainder of the paper is organized as follows. In the next section, we present some preliminary results on Gegenbauer polynomials and the gamma function. In section 3, we first carry out numerical experiments on the convergence rates of $f_n^{\lambda}$ and $p_n^*$ and then give some observations. In section 4 we analyze the convergence behavior of $f_n^{\lambda}$ for analytic functions: we first establish some explicit bounds for the Gegenbauer coefficients and then apply them to establish the optimal rate of convergence of $f_n^{\lambda}$. In section 5 we establish optimal rates of convergence of $f_n^{\lambda}$ for piecewise analytic functions with the help of some refined estimates of the Dirichlet kernel of Gegenbauer polynomials. We extend our discussion to functions of fractional smoothness in section 6 and give some concluding remarks in section 7.

2 Preliminaries

In this section, we introduce some basic properties of Gegenbauer polynomials and the gamma function that will be used throughout the paper. All these properties can be found in [18, 25].

2.1 Gamma function

For $\Re(z)>0$, the gamma function is defined by

\Gamma(z) = \int_{0}^{\infty} t^{z-1} e^{-t}\,\mathrm{d}t.    (2.1)

When $\Re(z)\le0$, $\Gamma(z)$ is defined by analytic continuation. The gamma function satisfies the recursive property $\Gamma(z+1) = z\,\Gamma(z)$ and the classical reflection formula

\Gamma(z)\,\Gamma(1-z) = \frac{\pi}{\sin(\pi z)}, \qquad z\notin\mathbb{Z}.    (2.2)

Moreover, the duplication formula of the gamma function reads

\Gamma(2z) = \frac{2^{2z-1}}{\sqrt{\pi}}\,\Gamma(z)\,\Gamma\!\left(z+\tfrac{1}{2}\right).    (2.3)

The ratio of two gamma functions will be crucial for the derivation of explicit bounds for the Gegenbauer coefficients and for the asymptotic behavior of the Dirichlet kernel of Gegenbauer projections. Let $a$ and $b$ be real or complex bounded constants; then, as $z\to\infty$,

\frac{\Gamma(z+a)}{\Gamma(z+b)} = z^{a-b}\left(1 + O(z^{-1})\right).    (2.4)
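As a quick sanity check of (2.4), not part of the original text, one can verify numerically that $\Gamma(n+a)/\Gamma(n+b)$ behaves like $n^{a-b}$; the values of $a$, $b$ and $n$ below are arbitrary.

```python
import numpy as np
from scipy.special import gammaln

a, b = 0.3, 1.2                          # arbitrary bounded parameters
for n in [10, 100, 1000, 10000]:
    ratio = np.exp(gammaln(n + a) - gammaln(n + b))
    print(n, ratio / n ** (a - b))       # tends to 1 as n grows
```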

For special choices of the parameters, the following simple and sharp bounds will be useful.

Lemma 2.1.

Let and . Then

(2.5)

and

(2.6)

These upper bounds are sharp in the sense that they can be attained when or .

Proof.

We only prove (2.5), as the proof of (2.6) is completely analogous. In the two boundary cases, (2.5) is trivial. For the remaining cases, we introduce the following sequence

In view of the recursive property of the gamma function, we obtain

By differentiating the right-hand side of the above equation with respect to , one can easily check that the sequence is strictly increasing when and is strictly decreasing when or . Since , we deduce that is strictly decreasing when and is strictly increasing when or . Hence, if , we have

and the upper bound can be attained when . If or , then

and the upper bound can be attained when . This proves (2.5) and the proof of Lemma 2.1 is complete. ∎

2.2 Gegenbauer polynomials

Let $n\ge0$ be an integer. The Gegenbauer (or ultraspherical) polynomial $C_n^{\lambda}(x)$ of degree $n$ is defined by

C_n^{\lambda}(x) = \frac{(2\lambda)_n}{n!}\,{}_2F_1\!\left(-n,\; n+2\lambda;\; \lambda+\tfrac{1}{2};\; \frac{1-x}{2}\right),    (2.7)

where ${}_2F_1$ is the Gauss hypergeometric function defined by

{}_2F_1(a,b;c;z) = \sum_{k=0}^{\infty} \frac{(a)_k (b)_k}{(c)_k}\,\frac{z^k}{k!},

and where $(a)_k$ denotes the Pochhammer symbol defined by $(a)_k = a(a+1)\cdots(a+k-1)$ for $k\ge1$ and $(a)_0 = 1$. The sequence of Gegenbauer polynomials $\{C_k^{\lambda}(x)\}_{k\ge0}$ forms a system of polynomials orthogonal over the interval $[-1,1]$ with respect to the weight function $\omega_{\lambda}(x) = (1-x^2)^{\lambda-1/2}$, and

\int_{-1}^{1} C_m^{\lambda}(x)\,C_n^{\lambda}(x)\,\omega_{\lambda}(x)\,\mathrm{d}x = h_n^{\lambda}\,\delta_{mn},    (2.8)

where $\delta_{mn}$ is the Kronecker delta and

h_n^{\lambda} = \frac{2^{1-2\lambda}\,\pi\,\Gamma(n+2\lambda)}{\Gamma(n+1)\,(n+\lambda)\,\Gamma(\lambda)^2}.

Since $\omega_{\lambda}$ is even, it follows that the Gegenbauer polynomials satisfy the symmetry relation

C_n^{\lambda}(-x) = (-1)^n\,C_n^{\lambda}(x),    (2.9)

which implies that $C_n^{\lambda}(x)$ is an even function for even $n$ and an odd function for odd $n$. For $\lambda>0$, Gegenbauer polynomials satisfy the inequality

\max_{x\in[-1,1]} |C_n^{\lambda}(x)| = C_n^{\lambda}(1) = \frac{(2\lambda)_n}{n!},    (2.10)

and for $-1/2<\lambda<0$,

\max_{x\in[-1,1]} |C_n^{\lambda}(x)| \le c\,n^{\lambda-1},    (2.11)

where $c$ is a positive constant independent of $n$. The Rodrigues formula of Gegenbauer polynomials reads

C_n^{\lambda}(x) = \frac{(-1)^n\,(2\lambda)_n}{2^n\,n!\,(\lambda+\frac{1}{2})_n}\,(1-x^2)^{\frac{1}{2}-\lambda}\,\frac{\mathrm{d}^n}{\mathrm{d}x^n}\!\left[(1-x^2)^{n+\lambda-\frac{1}{2}}\right],    (2.12)

which plays an important role in the asymptotic analysis of the Gegenbauer coefficients.

Gegenbauer polynomials include some important polynomials, such as Legendre and Chebyshev polynomials, as special cases. More specifically, we have

P_n(x) = C_n^{1/2}(x), \qquad U_n(x) = C_n^{1}(x),    (2.13)

where $P_n(x)$ is the Legendre polynomial of degree $n$ and $U_n(x)$ is the Chebyshev polynomial of the second kind of degree $n$. When $\lambda\to0$, the Gegenbauer polynomials reduce to the Chebyshev polynomials of the first kind through the limit

T_n(x) = \frac{n}{2}\,\lim_{\lambda\to0}\frac{C_n^{\lambda}(x)}{\lambda}, \qquad n\ge1,    (2.14)

where $T_n(x)$ is the Chebyshev polynomial of the first kind of degree $n$.
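These special cases are easy to confirm numerically; the following check (our own illustration, with an arbitrary degree and grid) uses SciPy's built-in orthogonal polynomial evaluators.

```python
import numpy as np
from scipy.special import eval_gegenbauer, eval_legendre, eval_chebyu

x = np.linspace(-1.0, 1.0, 7)
n = 5
print(np.allclose(eval_gegenbauer(n, 0.5, x), eval_legendre(n, x)))  # True
print(np.allclose(eval_gegenbauer(n, 1.0, x), eval_chebyu(n, x)))    # True
```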

3 Experimental observations

In this section we present some experimental observations on the convergence behavior of Gegenbauer projections. First, from the orthogonality (2.8) we have

a_k^{\lambda} = \frac{1}{h_k^{\lambda}} \int_{-1}^{1} f(x)\,C_k^{\lambda}(x)\,\omega_{\lambda}(x)\,\mathrm{d}x.    (3.1)

In order to quantify the difference between the rates of convergence of $f_n^{\lambda}$ and $p_n^*$, we introduce the quantity

R_n^{\lambda}(f) = \frac{\|f - f_n^{\lambda}\|_{\infty}}{\|f - p_n^*\|_{\infty}} = \frac{\|f - f_n^{\lambda}\|_{\infty}}{E_n(f)}.    (3.2)

Throughout the rest of the paper, maximum norms are evaluated numerically at a large number of sample points. In addition, we compute $E_n(f)$ using the barycentric-Remez algorithm in [21]; its implementation is available in Chebfun via the minimax command (see [7]).
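The experiments of this section can be reproduced in outline as follows. The sketch below is our own, not the paper's code: it computes the Gegenbauer coefficients (3.1) by Gauss–Gegenbauer quadrature and measures $\|f - f_n^{\lambda}\|_{\infty}$ on a fine grid. It assumes $\lambda>0$ (and $\lambda\ne0$, as required by the quadrature routine), and the test function is an arbitrary stand-in for those in (3.3); the paper's computation of $E_n(f)$ via Chebfun's minimax is not reproduced here.

```python
import numpy as np
from scipy.special import roots_gegenbauer, eval_gegenbauer, gammaln

def gegenbauer_projection_error(f, n, lam, nquad=400, ngrid=2001):
    # Gauss-Gegenbauer nodes/weights for the weight (1-x^2)^(lam-1/2).
    xq, wq = roots_gegenbauer(nquad, lam)
    k = np.arange(n + 1)
    # h_k = pi*2^(1-2*lam)*Gamma(k+2*lam) / (Gamma(k+1)*(k+lam)*Gamma(lam)^2), in log form.
    log_h = (np.log(np.pi) + (1 - 2 * lam) * np.log(2.0) + gammaln(k + 2 * lam)
             - gammaln(k + 1) - np.log(k + lam) - 2 * gammaln(lam))
    C = np.array([eval_gegenbauer(kk, lam, xq) for kk in k])  # (n+1, nquad)
    a = (C * wq) @ f(xq) / np.exp(log_h)                      # coefficients (3.1)
    xx = np.linspace(-1.0, 1.0, ngrid)
    fn = sum(a[kk] * eval_gegenbauer(kk, lam, xx) for kk in k)
    return np.max(np.abs(f(xx) - fn))

f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)    # assumed stand-in test function
for n in [10, 20, 40, 80]:
    print(n, gegenbauer_projection_error(f, n, lam=1.5))
```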

3.1 Analytic functions

We consider the following three test functions

(3.3)

It is clear that the first function is analytic in the whole complex plane and that the last two functions are analytic in a neighborhood of the interval $[-1,1]$. We divide the choice of the parameter $\lambda$ into two ranges: $-1/2<\lambda\le0$ and $\lambda>0$.

Figure 1: Top row shows the log plot of the maximum errors of $p_n^*$ and $f_n^{\lambda}$ with two values of $\lambda$ in the range $-1/2<\lambda\le0$, for the three test functions in (3.3) (left, middle, right). Bottom row shows the plot of the corresponding ratio (3.2) for the two values of $\lambda$.

Figure 1 illustrates the maximum errors of $f_n^{\lambda}$ and $p_n^*$ for two values of $\lambda$ in the range $-1/2<\lambda\le0$, together with the quantity (3.2) as a function of $n$. From the top row of Figure 1, we see that the accuracy of $f_n^{\lambda}$ is indistinguishable from that of $p_n^*$. From the bottom row of Figure 1, we see that the two ratios tend, respectively, to some finite constants as $n$ grows, and thus the rate of convergence of $f_n^{\lambda}$ is the same as that of $E_n(f)$. Figure 2 illustrates the maximum errors of $f_n^{\lambda}$ and $p_n^*$ for two values of $\lambda>0$, together with the quantity (3.2) scaled by $n^{-\lambda}$ as a function of $n$. From the top row of Figure 2, we see clearly that the rate of convergence of $f_n^{\lambda}$ is slower than that of $p_n^*$. From the bottom row of Figure 2, we see that the scaled quantities tend, respectively, to some finite constants as $n$ grows, which implies that the convergence rate of $f_n^{\lambda}$ is slower than that of $E_n(f)$ by a factor of $n^{\lambda}$.

Figure 2: Top row shows the log plot of the maximum errors of $p_n^*$ and $f_n^{\lambda}$ with two values of $\lambda>0$, for the three test functions in (3.3) (left, middle, right). Bottom row shows the log plot of the corresponding scaled ratios for the two values of $\lambda$.

In summary, the above observations suggest the following conclusions:

  • For $-1/2<\lambda\le0$, the rate of convergence of $f_n^{\lambda}$ is the same as that of $E_n(f)$;

  • For $\lambda>0$, however, the rate of convergence of $f_n^{\lambda}$ is slower than that of $E_n(f)$ by a factor of $n^{\lambda}$.

3.2 Differentiable functions

We consider the following test functions

(3.4)

where $x_{+}$ denotes the truncated power function defined by

x_{+} = \begin{cases} x, & x\ge0, \\ 0, & x<0. \end{cases}    (3.5)

It is clear that these three functions are all piecewise analytic functions, whose definition will be given in section 5. In our numerical tests, we divide the choice of the parameter $\lambda$ into two ranges: $-1/2<\lambda\le1$ and $\lambda>1$.

Figure 3: Top row shows the log-log plot of the maximum errors of $p_n^*$ and $f_n^{\lambda}$ with two values of $\lambda$ in the range $-1/2<\lambda\le1$, for the three test functions in (3.4) (left, middle, right). Bottom row shows the plot of the corresponding ratio (3.2) for the two values of $\lambda$.

Figure 3 illustrates the maximum errors of $f_n^{\lambda}$ and $p_n^*$ for two values of $\lambda$ in the range $-1/2<\lambda\le1$, together with the quantity (3.2) as a function of $n$. From the top row of Figure 3, we see that the accuracy of $f_n^{\lambda}$ is slightly worse than that of $p_n^*$. From the bottom row of Figure 3, we see that the two ratios tend to or oscillate around some finite constants as $n$ grows, which implies that the rate of convergence of $f_n^{\lambda}$ is the same as that of $E_n(f)$. Figure 4 illustrates the maximum errors of $f_n^{\lambda}$ and $p_n^*$ for two values of $\lambda>1$, together with the quantity (3.2) scaled by $n^{1-\lambda}$ as a function of $n$. From the top row of Figure 4, we see that the rate of convergence of $f_n^{\lambda}$ is significantly slower than that of $p_n^*$. From the bottom row of Figure 4, we see that the scaled quantities tend to or oscillate around some finite constants as $n$ grows, which implies that the rate of convergence of $f_n^{\lambda}$ is slower than that of $E_n(f)$ by a factor of $n^{\lambda-1}$.

Figure 4: Top row shows the log-log plot of the maximum errors of $p_n^*$ and $f_n^{\lambda}$ with two values of $\lambda>1$, for the three test functions in (3.4) (left, middle, right). Bottom row shows the log plot of the corresponding scaled ratios for the two values of $\lambda$.

In summary, the above observations suggest the following conclusions:

  • For $-1/2<\lambda\le1$, the rate of convergence of $f_n^{\lambda}$ is the same as that of $E_n(f)$;

  • For $\lambda>1$, however, the rate of convergence of $f_n^{\lambda}$ is slower than that of $E_n(f)$ by a factor of $n^{\lambda-1}$, which is one power of $n$ smaller than the rate predicted using (1.2) and (1.3).

We remark that the convergence results for the particular case $\lambda=0$ (which corresponds to Chebyshev projections) have been included in the above two observations. We refer to [16, 27] for more details on the convergence rate analysis of Chebyshev projections and to [31] for a comparison of Chebyshev projections with best approximations. Hereafter, we will omit discussion of this case.
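For a rough numerical illustration of the two observations above, the gegenbauer_projection_error sketch from the beginning of this section can be applied to a piecewise analytic function; $f(x)=|x|$ and the parameter values below are our own assumptions, not data from the paper.

```python
import numpy as np

f = np.abs                                    # assumed piecewise analytic function
for lam in [0.5, 2.0]:
    errs = [gegenbauer_projection_error(f, n, lam) for n in (8, 16, 32, 64)]
    orders = -np.diff(np.log(errs)) / np.log(2.0)   # observed algebraic orders
    print(lam, [f"{e:.2e}" for e in errs], orders)
```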

4 Optimal rate of convergence of Gegenbauer projections for analytic functions

In this section we study the optimal rate of convergence of Gegenbauer projections for analytic functions. Let $E_{\rho}$ denote the Bernstein ellipse

E_{\rho} = \left\{ z\in\mathbb{C} : z = \tfrac{1}{2}\left(\rho e^{\mathrm{i}\theta} + \rho^{-1} e^{-\mathrm{i}\theta}\right),\; 0\le\theta\le2\pi \right\}, \qquad \rho>1;    (4.1)

it has foci at $\pm1$, and its major and minor semi-axes are given by $(\rho+\rho^{-1})/2$ and $(\rho-\rho^{-1})/2$, respectively.
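For instance, if $f$ has a real pole at $x_0$ with $|x_0|>1$ and is otherwise analytic, the largest admissible ellipse parameter is $\rho = |x_0| + \sqrt{x_0^2-1}$; a one-line check (the pole location is an assumed example):

```python
import numpy as np

x0 = 1.2                                 # assumed pole location outside [-1, 1]
rho = x0 + np.sqrt(x0 ** 2 - 1.0)        # parameter of the largest Bernstein ellipse
print(rho)                               # f(x) = 1/(x0 - x) is analytic inside E_rho
```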

The starting point of our analysis is the contour integral expression of the Gegenbauer coefficients.

Lemma 4.1.

Suppose that $f$ is analytic in the region bounded by the ellipse $E_{\rho}$ for some $\rho>1$. Then, for each $\lambda>-1/2$ and $n\ge0$,

(4.2)

where the sign in $\sqrt{z^2-1}$ is chosen so that $|z+\sqrt{z^2-1}|>1$, and the normalization constant is defined by

(4.3)
Proof.

With a different normalization of the Gegenbauer polynomials, (4.2) was first derived by Cantero and Iserles in [3] for developing fast algorithms. The idea is to express the Gegenbauer coefficients as a linear combination of Chebyshev coefficients of the second kind, and then as an integral transform with a Gauss hypergeometric function as its kernel. Due to the slow convergence of the Taylor series of the kernel, a hypergeometric transformation was used to replace the original kernel with a new one that converges much more rapidly, which gives (4.2). An alternative and simpler approach to the derivation of (4.2) was proposed in [29]; the idea there is to rearrange the Chebyshev coefficients of the second kind. We refer the interested reader to [3, 29] for more details. ∎

We now state some new bounds on the Gegenbauer coefficients, valid for all admissible values of $n$ and $\lambda$. Compared to [29, Thm. 4.3], our new bounds are more concise for positive $\lambda$ and are new for negative $\lambda$.

Theorem 4.2.

Adopt the assumptions of Lemma 4.1. Then, for each $n$,

(4.4)

where is defined by

(4.5)

and $L(E_{\rho})$ is the length of the circumference of $E_{\rho}$.

Proof.

We follow the idea of the proof in [29]. From Lemma 4.1 and [29, Theorem 4.1] we have that

(4.6)

The remaining task is to bound the constant and the hypergeometric functions on the right-hand side of (4.6). For the former, the bound is immediate in one range of $\lambda$; for the remaining range, using Lemma 2.1 we obtain

(4.7)

We now bound the hypergeometric functions on the right-hand side of (4.6). Using the Euler integral representation of the Gauss hypergeometric function [18, Equation (15.6.1)], we obtain

(4.8)

When , it is easy to verify that

It follows that

(4.9)

Combining (4.6), (4.7) and (4.9), the desired bounds follow immediately. ∎

With the above preparation, we now derive error estimates of Gegenbauer projections in the maximum norm. Compared to the results in [29, Thm. 4.8], our new results are more concise and informative. Throughout the paper, $\lfloor\cdot\rfloor$ denotes the integer part of a real number.

Theorem 4.3.

Suppose that is analytic in the region bounded by the ellipse for some .

  1. If $\lambda>0$, then for $n\ge1$ we have

    (4.10)

    where the quantity is defined by

    and $c$ is a generic positive constant.

  2. If $-1/2<\lambda<0$, we have

    (4.11)

    where $c$ is the positive constant defined in (2.11).

Up to constant factors, the bounds on the right-hand side are optimal in the sense that they cannot be improved by any negative power of $n$.

Proof.

For (i), by Lemma 2.1 we have

Combining these bounds together with Theorem 4.2 gives

For the sum inside the bracket, noting that the summand is strictly decreasing, we obtain that

(4.12)

where $\Gamma(\cdot,\cdot)$ is the incomplete gamma function (see, e.g., [18, p. 174]). Combining the above estimates, the desired result (4.10) follows. The proof of (ii) is similar and we omit the details.

We now turn to the optimality of (4.10) and (4.11). Here we only prove the former, since the latter can be proved by a similar argument. Suppose, by contradiction, that there exist constants independent of $n$ such that

(4.13)

We consider a model function with a simple pole, chosen so that the function is analytic inside a Bernstein ellipse whose parameter exceeds $\rho$ by an amount that may be taken arbitrarily small. Using Lemma 4.1 and the residue theorem, we can write the Gegenbauer coefficients of this function as

(4.14)

Clearly, these coefficients are nonzero for all degrees. Moreover, by considering the ratio of consecutive coefficients, it is not difficult to verify that the relevant sequence is strictly increasing. We now consider the error of $f_n^{\lambda}$ at the point $x=1$. In view of (2.10), we obtain that

Combining this with (4.13) we deduce that

(4.15)

Combining (4.15), (2.4) and (4.14) yields an upper bound on the error at $x=1$; on the other hand, the residue expression provides a lower bound. This leads to a contradiction, since the upper bound may be smaller than the lower bound when the free parameter above is taken sufficiently small. Therefore, we can conclude that the derived bound (4.10) is optimal and cannot be improved by any negative power of $n$. This completes the proof. ∎

Remark 4.4.

From [4, p. 131] we know that $E_n(f) = O(\rho^{-n})$ for functions analytic inside $E_{\rho}$. Comparing this with (4.10) and (4.11), it is easily seen that the rate of convergence of $f_n^{\lambda}$ is slower than that of $E_n(f)$ by a factor of $n^{\lambda}$ for $\lambda>0$ and is the same as that of $E_n(f)$ for $-1/2<\lambda<0$. This explains the convergence behavior of $f_n^{\lambda}$ illustrated in Figures 1 and 2.
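A hedged numerical illustration of this remark, reusing the gegenbauer_projection_error sketch from Section 3 (the pole location and the value of $\lambda$ are assumptions, not the paper's examples):

```python
import numpy as np

x0 = 1.2                                  # assumed pole location
rho = x0 + np.sqrt(x0 ** 2 - 1.0)
f = lambda x: 1.0 / (x0 - x)              # analytic inside E_rho
lam = 1.5                                 # assumed lambda > 0
for n in [10, 20, 40]:
    err = gegenbauer_projection_error(f, n, lam)
    print(n, err / (n ** lam * rho ** (-n)))   # roughly constant if the rate is n^lam * rho^(-n)
```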

Remark 4.5.

Polynomial interpolation in the zeros of Gegenbauer polynomials is also a powerful approach for analytic functions. When the interpolation nodes are the zeros of $C_{n+1}^{\lambda}(x)$, it has been shown in [34, Thm. 4.1] that the rate of convergence of Gegenbauer interpolation in the maximum norm is $O(n^{\lambda}\rho^{-n})$ for $\lambda>0$ and is $O(\rho^{-n})$ if $-1/2<\lambda<0$. Comparing this with Theorem 4.3, we see that Gegenbauer interpolation and projection of the same degree enjoy the same convergence rate.

5 Optimal rate of convergence of Gegenbauer projections for piecewise analytic functions

In this section we study the optimal rate of convergence of Gegenbauer projections for piecewise analytic functions. Throughout this section, we denote by $c$ a generic positive constant independent of $n$, which may take different values at different places.

We first introduce the definition of piecewise analytic functions.

Definition 5.1.

Suppose that $f\in C([-1,1])$. Let $m\ge1$, let $-1 = x_0 < x_1 < \cdots < x_m < x_{m+1} = 1$, and assume that the interior points $\{x_j\}_{j=1}^{m}$ are distinct points such that $f$ is analytic on each of the subintervals $[x_j, x_{j+1}]$, $j=0,\ldots,m$, but is not analytic at the points $x_1,\ldots,x_m$ themselves. We then say that $f$ is a piecewise analytic function on $[-1,1]$.

We now consider the convergence rate analysis of Gegenbauer projections. First of all, using the integral expression of Gegenbauer coefficients, we can rewrite the Gegenbauer projection as

f_n^{\lambda}(x) = \int_{-1}^{1} f(y)\,K_n^{\lambda}(x,y)\,\omega_{\lambda}(y)\,\mathrm{d}y,    (5.1)

where $K_n^{\lambda}(x,y)$ is the Dirichlet kernel of the Gegenbauer projection, defined by

K_n^{\lambda}(x,y) = \sum_{k=0}^{n} \frac{C_k^{\lambda}(x)\,C_k^{\lambda}(y)}{h_k^{\lambda}} = \frac{k_n^{\lambda}}{h_n^{\lambda}\,k_{n+1}^{\lambda}}\,\frac{C_{n+1}^{\lambda}(x)\,C_n^{\lambda}(y) - C_n^{\lambda}(x)\,C_{n+1}^{\lambda}(y)}{x-y},    (5.2)

where $k_n^{\lambda} = 2^n(\lambda)_n/n!$ denotes the leading coefficient of $C_n^{\lambda}(x)$, and the last equality follows from the Christoffel-Darboux formula for Gegenbauer polynomials.
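As a small consistency check (our own illustration, not from the paper), the kernel (5.2) reproduces polynomials of degree at most $n$ under the weighted integral (5.1); the parameters below are arbitrary, with $\lambda>0$ assumed so the normalization in the comments applies.

```python
import numpy as np
from scipy.special import roots_gegenbauer, eval_gegenbauer, gammaln

lam, n = 0.8, 12                               # arbitrary parameters (lam > 0 here)
yq, wq = roots_gegenbauer(200, lam)            # rule for the weight (1-y^2)^(lam-1/2)
k = np.arange(n + 1)
# log of h_k = pi*2^(1-2*lam)*Gamma(k+2*lam) / (Gamma(k+1)*(k+lam)*Gamma(lam)^2)
log_h = (np.log(np.pi) + (1 - 2 * lam) * np.log(2.0) + gammaln(k + 2 * lam)
         - gammaln(k + 1) - np.log(k + lam) - 2 * gammaln(lam))
x = 0.3                                        # evaluation point (assumption)
Kxy = sum(eval_gegenbauer(kk, lam, x) * eval_gegenbauer(kk, lam, yq)
          / np.exp(log_h[kk]) for kk in k)
p = lambda t: t ** 5 - 2.0 * t + 1.0           # any polynomial of degree <= n
print(np.dot(wq, p(yq) * Kxy), p(x))           # the two printed values agree
```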

The following refined estimates for the Dirichlet kernel will be useful.

Lemma 5.2.

Let . Then, for and large ,

  1. If , it holds that .

  2. If with , it holds that