1 Introduction
This paper is concerned with the sub-optimality of Gauss–Hermite quadrature and the optimality of the trapezoidal rule for integrands of finite smoothness.
Given a function $f\colon\mathbb{R}\to\mathbb{R}$, Gauss–Hermite quadrature is arguably one of the standard numerical integration methods to compute the integral $\int_{\mathbb{R}} f(x)\,\phi(x)\,\mathrm{d}x$, where $\phi(x) := \mathrm{e}^{-x^2/2}/\sqrt{2\pi}$ denotes the density of the standard Gaussian measure.
It is a Gauss-type quadrature, i.e., the quadrature points are the zeros of the degree-$n$ orthogonal polynomial associated with the weight function, and the corresponding quadrature weights are readily defined. With the weight function $\phi$, the orthogonal polynomials we obtain are the (so-called probabilist's) Hermite polynomials.
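For concreteness, the nodes and weights of an $n$-point Gauss–Hermite rule in the probabilist's convention can be generated with standard software; the following minimal Python sketch is ours (the helper name gauss_hermite and the test integrand $x^4$ are illustrative choices, not taken from this paper).

# Minimal sketch (not from the paper): n-point Gauss-Hermite quadrature for
# integrals against the standard Gaussian density phi(x) = exp(-x^2/2)/sqrt(2*pi),
# using NumPy's probabilists' Hermite module.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gauss_hermite(f, n):
    # nodes = zeros of the degree-n Hermite polynomial; weights are for e^{-x^2/2} dx
    x, w = hermegauss(n)
    # renormalise so that the rule approximates the integral against phi(x) dx
    return (w @ f(x)) / math.sqrt(2.0 * math.pi)

# The Gaussian integral of x^4 equals 3; an n-point Gauss rule is exact for
# polynomials of degree up to 2n - 1, so n = 5 already reproduces it.
print(gauss_hermite(lambda x: x**4, 5))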
Gauss–Hermite quadrature is widely used; here, we mention only spectral methods [Guo.BY_1999_Hermite, mao2017hermite, Canuto.C_etal_2006_book], and applications in aerospace engineering [braun2021satellite, BBM2020book], finance [Brandimarte2006book, FR2008book, GH2010], and physics [SS2016book, Gezerlis2020book]. Nevertheless, the limitations of this method seem to be less well known.
We start with numerical results that illustrate this deficiency. Figure 1 shows a comparison of the Gauss–Hermite rule and a suitably truncated trapezoidal rule. Here the integral of the target function is taken with respect to the standard Gaussian measure, and for the trapezoidal rule we integrate the target function multiplied by the Gaussian density, upon a suitable cut-off of the domain. We discuss the setting of this experiment in more detail at the end of Section 4.
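The following Python sketch, which is ours and not the script behind Figure 1, illustrates such a comparison; the integrand $f(x)=|x|^3$, the cut-off interval $[-10,10]$, and the chosen values of $n$ are illustrative assumptions only.

# Illustrative comparison (ours): integrate f against the standard Gaussian
# density with (i) n-point Gauss-Hermite quadrature and (ii) the trapezoidal
# rule applied to f(x) * phi(x) on the truncated interval [-10, 10].
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def phi(x):
    return np.exp(-x**2 / 2.0) / math.sqrt(2.0 * math.pi)

f = lambda x: np.abs(x) ** 3            # a function of finite smoothness
exact = 2.0 * math.sqrt(2.0 / math.pi)  # E|X|^3 for X ~ N(0, 1)

for n in [8, 16, 32, 64, 128, 256]:
    x, w = hermegauss(n)
    gh = (w @ f(x)) / math.sqrt(2.0 * math.pi)
    t = np.linspace(-10.0, 10.0, n)
    g = f(t) * phi(t)
    h = t[1] - t[0]
    tr = h * (g.sum() - 0.5 * (g[0] + g[-1]))  # composite trapezoidal rule
    print(f"n = {n:4d}   GH error = {abs(gh - exact):.2e}   trapezoidal error = {abs(tr - exact):.2e}")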

What we observe in Figure 1 is that, as a function of the number $n$ of quadrature points, the trapezoidal rule converges at roughly twice the rate achieved by Gauss–Hermite quadrature. A similar empirical inefficiency of Gauss–Hermite quadrature is reported in a paper by one of the present authors and Nuyens [NS2021], and also in a recent paper by Trefethen [T2021], who pointed out the problem that the quadrature weights decay too fast relative to the widely spread quadrature points.
In this paper, we prove the sub-optimality of Gauss–Hermite quadrature. More precisely, we establish a sharp lower bound on the error decay of Gauss–Hermite quadrature in the sense of the worst-case error. Moreover, we show that a suitably truncated trapezoidal rule achieves the optimal rate of convergence, up to a logarithmic factor.
Integrands of our interest are functions with finite smoothness. In this regard, we work under the assumption that the integrand lives in the weighted $L^2$-Sobolev space with smoothness $\alpha\in\mathbb{N}$. This function space is widely used; see for example the books [B1998, Canuto.C_etal_2006_book, Shen.J_2011_book] and references therein. For such $\alpha$, this space is equivalent to the so-called Hermite space of finite smoothness, which has been attracting attention in high-dimensional computations; see [IG2015_Hermite, DILP2018, gnewuch2021countable] for its use in such problems, and see [B1998, 1.5.4. Proposition] together with Section 2 below for the equivalence.
In this setting, we prove that the worst-case integration error of Gauss–Hermite quadrature with $n$ points is bounded from below by $n^{-\alpha/2}$, up to a constant. This rate matches the upper bound shown by Mastroianni and Monegato [MM1994], and thus cannot be improved. Moreover, this rate provides a rigorous verification of the numerical findings made in [DILP2018, Section 4], where the authors computed approximate values of the worst-case error of Gauss–Hermite quadrature in the Hermite space of finite smoothness and observed the rate $n^{-\alpha/2}$ for two values of the smoothness parameter.
It turns out that this rate is merely half of the best possible: if we allow the quadrature points and weights to be arbitrary, then the best achievable rate for (linear) quadrature rules is $n^{-\alpha}$, up to possible logarithmic factors; see [DILP2018, Theorem 1] for a precise statement. Dick et al. [DILP2018] also show that a class of numerical integration methods based on so-called (scaled) higher order digital nets achieves this optimal rate, up to a logarithmic factor, in the multi-dimensional setting, which includes the one-dimensional setting as a special case.
Our results show that the trapezoidal rule, a method arguably far simpler than one-dimensional higher order digital nets, also achieves this optimal rate up to a logarithmic factor, and thus converges nearly twice as fast as the Gauss–Hermite rule. It is also worth mentioning that Gauss–Hermite quadrature requires a nontrivial algorithm to generate its quadrature points, whereas the trapezoidal rule simply uses equispaced points.
For analytic functions, the efficiency of the trapezoidal rule is well known; related studies date back at least to the paper by Goodwin in 1949 [Goodwin1949], and the rule is not only widely known to, but also still actively studied by, contemporary numerical analysts [S1997, MS2001, GilEtAl2007, Waldvogel2011, TW2014, T2021]. Our results show that this efficiency extends to functions of Sobolev class, for which tools from complex analysis such as contour integration are not available. Our proof uses the strategy recently developed by one of the present authors and Nuyens [NS2021] for a class of quasi-Monte Carlo methods.
We now mention other error estimates of Gauss–Hermite quadrature in the literature. Based on results by Freud
[Freud1972], Smith, Sloan, and Opie [SSO1983] showed an upper bound on the convergence rate for $r$-times continuously differentiable functions whose $r$-th derivative satisfies a suitable growth condition. Since the weighted Sobolev space seems to be more frequently used, our focus is on this class. Della Vecchia and Mastroianni [DM2003] showed an upper bound on the rate for integrable, almost everywhere differentiable functions whose derivative is also integrable. Moreover, their result implies a matching lower bound in the sense of the worst-case error, and thus in this sense their upper bound is sharp. It does not seem trivial whether their bounds generalise, for example to higher orders or to the $L^2$ setting, under $\alpha$-times differentiability of the integrand for $\alpha\ge 2$. In contrast, our results show that, by assuming square integrability with respect to the Gaussian weight, the rate improves to $n^{-\alpha/2}$ for general $\alpha\in\mathbb{N}$.

Before moving on, we mention that our results motivate further studies of Gauss–Hermite based algorithms in high-dimensional problems. Integration problems in high dimension arise, for example, in computing statistics of solutions of partial differential equations parametrised by random variables. In particular, integration with respect to the Gaussian measure has been attracting increasing attention; see for example
[Graham.I_eta_2015_Numerische, C2018, YK2019, HS2019ML, D2021, dung2022analyticity]. Since the measure is Gaussian, algorithms that use Gauss–Hermite quadrature as a building block have gained popularity [C2018, D2021, dung2022analyticity]. The key to proving error estimates in this context is the regularity of the quantity of interest with respect to the parameter, and for elliptic and parabolic problems such smoothness, even analytic regularity, has been shown [BNT2010SIREV, NT2009parabolic, dung2022analyticity]. In contrast, solutions of parametric hyperbolic systems suffer from limited regularity under mild assumptions [MNT2013_StochasticCollocationMethod, MNT2015_AnalysisComputationElasticWave]. Hence, our results, which show the suboptimality of the Gauss–Hermite rule for functions with finite smoothness, caution against, and encourage further study of, the use of algorithms based on the Gauss–Hermite rule for this class of problems.

Finally, we note that if we use the weight function $\mathrm{e}^{-x^2}$ instead of $\mathrm{e}^{-x^2/2}$, the corresponding orthogonal polynomials are called physicist's Hermite polynomials. Our results for Gauss–Hermite quadrature carry over to this setting by a simple rescaling of the variable by a factor of $\sqrt{2}$. Conversely, results stated for physicist's Hermite polynomials in the literature, e.g. in [Szego1975], are used throughout this paper after the corresponding rescaling.
The rest of this paper is organized as follows. In Section 2 we introduce the necessary definitions, namely the Hermite polynomials, the weighted Sobolev spaces, and the Hermite spaces, and discuss the norm equivalence between the weighted Sobolev space and the Hermite space. In Section 3 the sub-optimality of Gauss–Hermite quadrature is shown; in particular, we obtain matching lower and upper bounds for the worst-case error. In Section 4 the optimality of the trapezoidal rule is shown. Finally, Section 5 concludes the paper.
2 Function spaces with finite smoothness
Throughout this paper, we use the weighted space $L^2_\phi := L^2(\mathbb{R},\phi)$, the normed space consisting of the equivalence classes of Lebesgue measurable functions $f$ satisfying $\int_{\mathbb{R}} |f(x)|^2\,\phi(x)\,\mathrm{d}x < \infty$, where the equivalence relation is given by $f \sim g$ if and only if $f = g$ almost everywhere.
2.1 Hermite polynomials
For $k\in\mathbb{N}_0$, the $k$-th degree probabilist's Hermite polynomial is given by
(1)   $H_k(x) := \dfrac{(-1)^k}{\sqrt{k!}}\, \mathrm{e}^{x^2/2}\, \dfrac{\mathrm{d}^k}{\mathrm{d}x^k}\, \mathrm{e}^{-x^2/2},$
where the polynomials are normalised so that $\int_{\mathbb{R}} H_k^2(x)\,\phi(x)\,\mathrm{d}x = 1$ for all $k\in\mathbb{N}_0$. The polynomials $(H_k)_{k\in\mathbb{N}_0}$ form a complete orthonormal system for $L^2_\phi$.
The following properties are used throughout the paper.
(2)   $H_k'(x) = \sqrt{k}\,H_{k-1}(x), \qquad k\ge 1,$
(3)   $x\,H_k(x) = \sqrt{k+1}\,H_{k+1}(x) + \sqrt{k}\,H_{k-1}(x), \qquad k\ge 0,$
with the convention $H_{-1} := 0$.
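The following Python check (ours, not part of the paper) verifies numerically the orthonormality of the normalised Hermite polynomials and the standard derivative and three-term recurrence identities stated above.

# Numerical sanity check (ours): orthonormality of H_k = He_k / sqrt(k!) with
# respect to the standard Gaussian density, and the identities
# H_k'(x) = sqrt(k) H_{k-1}(x) and x H_k(x) = sqrt(k+1) H_{k+1}(x) + sqrt(k) H_{k-1}(x).
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def coeffs(k):
    # coefficients of the normalised Hermite polynomial H_k in the He basis
    c = np.zeros(k + 1)
    c[k] = 1.0 / math.sqrt(math.factorial(k))
    return c

# Gauss-Hermite nodes/weights; dividing the weights by sqrt(2*pi) makes the
# rule integrate against the standard Gaussian density (exactly for degree < 2n).
n = 60
x, w = He.hermegauss(n)
w = w / math.sqrt(2.0 * math.pi)

G = np.array([[w @ (He.hermeval(x, coeffs(j)) * He.hermeval(x, coeffs(k)))
               for k in range(10)] for j in range(10)])
assert np.allclose(G, np.eye(10), atol=1e-10)         # orthonormality

k = 7
dH = He.hermeval(x, He.hermeder(coeffs(k)))           # values of H_k'
assert np.allclose(dH, math.sqrt(k) * He.hermeval(x, coeffs(k - 1)))
assert np.allclose(x * He.hermeval(x, coeffs(k)),
                   math.sqrt(k + 1) * He.hermeval(x, coeffs(k + 1))
                   + math.sqrt(k) * He.hermeval(x, coeffs(k - 1)))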
2.2 Weighted Sobolev space
The function space we consider is a Sobolev space of square integrable functions, where the integrability condition is imposed with respect to the Gaussian measure.
Definition (Weighted Sobolev space)
For $\alpha\in\mathbb{N}$, the weighted Sobolev space $W^\alpha_\phi(\mathbb{R})$ (with the weight function $\phi$) is the class of all functions $f\in L^2_\phi$ such that $f$ has weak derivatives $f^{(\tau)}$ satisfying $f^{(\tau)}\in L^2_\phi$ for $\tau = 1,\dots,\alpha$, equipped with the norm
$\|f\|_{W^\alpha_\phi} := \Big( \sum_{\tau=0}^{\alpha} \int_{\mathbb{R}} |f^{(\tau)}(x)|^2\,\phi(x)\,\mathrm{d}x \Big)^{1/2}.$
Elements in $W^\alpha_\phi(\mathbb{R})$ for $\alpha\ge 1$ are in the standard local Sobolev space $H^\alpha_{\mathrm{loc}}(\mathbb{R})$, and thus admit a continuous representative. In what follows, we always take the continuous representative of $f\in W^\alpha_\phi(\mathbb{R})$.
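As a simple illustration (this example is ours, not from the paper), consider $f(x) = |x|$:
$\int_{\mathbb{R}} |x|^2\,\phi(x)\,\mathrm{d}x = 1 < \infty, \qquad f'(x) = \operatorname{sign}(x), \qquad \int_{\mathbb{R}} |\operatorname{sign}(x)|^2\,\phi(x)\,\mathrm{d}x = 1 < \infty,$
so $f\in W^1_\phi(\mathbb{R})$; its second distributional derivative $2\delta_0$ is not a function, so $f\notin W^2_\phi(\mathbb{R})$.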
We recall another important class of functions, the so-called Hermite space. For this space we follow the definition of [DILP2018].
Definition (Hermite space with finite smoothness)
For $\alpha\in\mathbb{N}$, the Hermite space with finite smoothness $\alpha$ is given by
$\mathcal{H}_\alpha := \Big\{ f\in L^2_\phi \;:\; \|f\|_{\mathcal{H}_\alpha}^2 := \sum_{k=0}^{\infty} r_\alpha(k)\,|\hat{f}(k)|^2 < \infty \Big\},$
where $\hat{f}(k) := \int_{\mathbb{R}} f(x)\,H_k(x)\,\phi(x)\,\mathrm{d}x$ denotes the $k$-th Hermite coefficient of $f$, and $r_\alpha(0) := 1$, $r_\alpha(k) := k^{\alpha}$ for $k\ge 1$.
It turns out that $W^\alpha_\phi(\mathbb{R}) = \mathcal{H}_\alpha$, where the equality here means equality of the sets together with norm equivalence. Hence, results established for the Hermite space can be readily translated to $W^\alpha_\phi(\mathbb{R})$ up to a constant, which allows us to compare the results on higher order digital nets in [DILP2018] with ours.
A proof of this norm equivalence is outlined in [B1998, 1.5.4. Proposition]; see also [DILP2018, Lemma 6]. Here, for completeness, we give a proof.
Lemma 1
Let $f\in W^\alpha_\phi(\mathbb{R})$ with $\alpha\in\mathbb{N}$. Then $f\in\mathcal{H}_\alpha$ with the same smoothness parameter $\alpha$.
Proof
We first prove the claim for $\alpha = 1$. Assume $f\in W^1_\phi(\mathbb{R})$. Let and for . We have
for any function in the space $C_c^\infty(\mathbb{R})$ of compactly supported infinitely differentiable functions. Since is in the standard Sobolev space , there exists a sequence that satisfies
Then, letting , the Cauchy–Schwarz inequality implies , and for we also have
Therefore both and define a continuous functional on . Hence, we have and thus
which is equivalent to
(4)
where we used . Hence we obtain
and
This implies since . For general $\alpha\in\mathbb{N}$, assuming $f\in W^\alpha_\phi(\mathbb{R})$ and repeating the same argument as above, we have for and thus
Hence, observing (see also [DILP2018, p. 687]), we conclude $f\in\mathcal{H}_\alpha$.
3 Matching bounds for Gauss–Hermite quadrature
In this section, we prove the sub-optimality of Gauss–Hermite quadrature. We first introduce a linear quadrature rule of the following general form:
(5)   $Q_n(f) := \displaystyle\sum_{i=1}^{n} w_i\, f(x_i),$
with arbitrary distinct quadrature points $x_1,\dots,x_n$ on the real line and quadrature weights $w_1,\dots,w_n\in\mathbb{R}$. Gauss–Hermite quadrature is given by taking the points to be the roots of the Hermite polynomial $H_n$ and the weights to be $w_i = 1/\big(n\,[H_{n-1}(x_i)]^2\big)$, $i = 1,\dots,n$; see for example [Shen.J_2011_book, Theorem 3.5].
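As a cross-check (ours) of this weight formula, one can compare it with the weights returned by NumPy; the normalisation by $\sqrt{2\pi}$ reflects the Gaussian density convention used above.

# Cross-check (ours): the Gauss-Hermite weights for the standard Gaussian
# density agree with w_i = 1 / (n * H_{n-1}(x_i)^2), where H_{n-1} is the
# normalised probabilists' Hermite polynomial and x_i are the zeros of H_n.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

n = 20
x, w = hermegauss(n)
w = w / math.sqrt(2.0 * math.pi)      # weights for the density phi(x) dx
c = np.zeros(n)
c[n - 1] = 1.0 / math.sqrt(math.factorial(n - 1))
H_nm1 = hermeval(x, c)                # H_{n-1} evaluated at the nodes
assert np.allclose(w, 1.0 / (n * H_nm1**2))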
Given a quadrature rule $Q_n$ of the form (5), it is convenient to introduce the notation
$e(Q_n; W^\alpha_\phi) := \sup_{f\in W^\alpha_\phi(\mathbb{R}),\ \|f\|_{W^\alpha_\phi}\le 1} \Big|\, \int_{\mathbb{R}} f(x)\,\phi(x)\,\mathrm{d}x - Q_n(f) \,\Big|.$
The quantity $e(Q_n; W^\alpha_\phi)$ is commonly referred to as the worst-case error of $Q_n$ in $W^\alpha_\phi(\mathbb{R})$; see for example [Dick.J_Kuo_Sloan_2013_ActaNumerica]. Now we can state the aim of this section more precisely: we prove matching lower and upper bounds on $e(Q_n; W^\alpha_\phi)$ for Gauss–Hermite quadrature.
3.1 Lower bound
We first derive the following lower bound on the worst-case error $e(Q_n; W^\alpha_\phi)$ for the general quadrature (5).
Lemma 2
Let $\alpha\in\mathbb{N}$. For the quadrature points and weights in (5), let
(6)
Then, there exists a constant, which depends only on $\alpha$, such that the worst-case error of a general function-value-based linear quadrature (5) in the weighted Sobolev space $W^\alpha_\phi(\mathbb{R})$ is bounded from below by
where is equal to 1 if and 0 otherwise.
Proof
The heart of the matter is to construct a fooling function that vanishes at every quadrature point, so that the quadrature (5) returns zero, while its $W^\alpha_\phi$-norm is small and its integral against $\phi$ is large. Define a function by
This function turns out to fulfill our purpose. This type of fooling function, used to prove lower bounds for the worst-case error, is called a bump function (of finite smoothness) in quasi-Monte Carlo theory; see, for instance, [DHP2015, Section 2.7].
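For orientation, a typical bump of smoothness $\alpha$ (an illustrative form, not necessarily the one used in this proof) supported on an interval $(a,b)$ that contains no quadrature point is
$b_{a,b}(x) := \big( (x-a)(b-x) \big)^{\alpha}\, \mathbf{1}_{(a,b)}(x);$
it vanishes at every quadrature point, belongs to $W^\alpha_\phi(\mathbb{R})$, and has a strictly positive integral against $\phi$, so the quadrature error for it equals that integral.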
First we show . It follows from
that we have
for and any . As we have for , the function is -times continuously differentiable. Moreover, is continuous piecewise polynomial and thus weakly differentiable. Also, noting that only when and otherwise , we have
for . The last sum over and does not depend on . Denoting this sum by , we obtain
This proves .
By definition of , we have for all , and thus
Moreover, we have
Using the above results, we obtain
Now the proof is complete.
Using the general lower bound in Lemma 2, we obtain the following lower bound on the worst-case error for Gauss–Hermite quadrature.
Theorem 1
Let $\alpha\in\mathbb{N}$. For any $n\in\mathbb{N}$, the worst-case error of Gauss–Hermite quadrature with $n$ points in the weighted Sobolev space $W^\alpha_\phi(\mathbb{R})$ is bounded from below by a constant multiple of $n^{-\alpha/2}$,
with the constant depending on $\alpha$ but independent of $n$.
Proof
Let $x_i$, $i = 1,\dots,n$, be the roots of $H_n$. For each $i$, it holds that
(7)
see, for instance, [Szego1975, Eq. (6.31.22)]. Thus, to invoke Lemma 2 we let
so that the conditions in (6) are satisfied. Also, each node is bounded from below and above as follows; see, for instance, [Szego1975, Eq. (6.31.19)]: for odd $n$, one of the zeros is the origin and the positive zeros satisfy
(8)
and for even $n$,
(9)
with the negative zeros obtained by the symmetry of the zeros about the origin, for both odd and even $n$.
Let $n$ be odd. Using the result in Lemma 2 and equations (7) and (8), together with the symmetry of the Hermite zeros, we obtain
The sum over is further bounded below by
where $\operatorname{erf}$ denotes the error function, and the last inequality holds for any odd $n$. The sum over is further bounded from above by
Using these bounds, we have