1 Introduction
We study error bounds for quadrature formulas and assume that the integrand is from a Hilbert space $H$ of real-valued functions defined on a set $D$. We assume that function evaluation is continuous and hence we are dealing with a reproducing kernel Hilbert space (RKHS) with a kernel $K \colon D \times D \to \mathbb{R}$. We want to compute $S(f)$ for $f \in H$, where $S$ is a continuous linear functional, hence $S(f) = \langle f, h \rangle$ for some $h \in H$. We consider, for $n \in \mathbb{N}$ and $x_1, \dots, x_n \in D$, quadrature formulas defined by
$$
A_n(f) \,=\, \sum_{i=1}^n a_i\, f(x_i), \qquad a_1, \dots, a_n \in \mathbb{R}.
$$
Then the worst case error (on the unit ball of $H$) of $A_n$ is defined by
$$
e(A_n) \,=\, \sup_{\|f\|_H \le 1} \bigl| S(f) - A_n(f) \bigr|.
$$
If we fix a set of sample points $x_1, \dots, x_n \in D$, we may define the radius of information by
$$
r(x_1, \dots, x_n) \,=\, \inf_{a_1, \dots, a_n \in \mathbb{R}} e(A_n).
$$
Our main interest is in the optimization of the sample points $x_1, \dots, x_n$ as well as of the weights $a_1, \dots, a_n$. Then we obtain the $n$th minimal worst case error
$$
e(n) \,=\, \inf_{x_1, \dots, x_n \in D} r(x_1, \dots, x_n).
$$
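To make these definitions concrete, the radius of information can be computed by finite linear algebra: with the Gram matrix $G_{ij} = K(x_i, x_j)$ and $b_i = h(x_i)$, the optimal weights are $a = G^{-1} b$ and $r(x_1, \dots, x_n)^2 = \|h\|^2 - b^{\mathsf T} G^{-1} b$. The following sketch illustrates this for an assumed toy example (a Gaussian kernel on $[0,1]$ with $S(f) = \int_0^1 f$; not a space studied in this paper):

```python
import numpy as np
from math import erf, sqrt, pi

# Toy RKHS with Gaussian kernel K(x, y) = exp(-(x - y)^2) on D = [0, 1],
# and S(f) = int_0^1 f(x) dx, whose representer is h(x) = int_0^1 K(x, y) dy.
def K(x, y):
    return np.exp(-(x - y) ** 2)

def h(x):
    # int_0^1 exp(-(x - y)^2) dy = (sqrt(pi)/2) * (erf(x) + erf(1 - x))
    return (sqrt(pi) / 2) * np.array([erf(t) + erf(1 - t) for t in np.atleast_1d(x)])

# ||h||^2 = S(h), approximated here by a fine midpoint rule.
xs = (np.arange(20000) + 0.5) / 20000
h_norm_sq = np.mean(h(xs))

def radius_sq(points):
    """r(x_1,...,x_n)^2 = ||h||^2 - b^T G^{-1} b with optimal weights a = G^{-1} b."""
    x = np.asarray(points, dtype=float)
    G = K(x[:, None], x[None, :])
    b = h(x)
    return h_norm_sq - b @ np.linalg.solve(G, b)

r1 = radius_sq([0.5])
r2 = radius_sq([0.25, 0.75])
r3 = radius_sq([1/6, 3/6, 5/6])
print(r1, r2, r3)  # the squared error decreases as sample points are added
```

The same two ingredients, the Gram matrix $G$ and the vector $b$ of representer values, are exactly what the positive semidefiniteness criteria below are formulated in terms of.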
We are mainly interested in tensor product problems. We will therefore assume that $H_j$ is a RKHS on a domain $D_j$ with kernel $K_j$ for all $j \in \{1, \dots, d\}$ and that
$$
H \,=\, H_1 \otimes \dots \otimes H_d
$$
is the tensor product of these spaces. That is, $H$ is a RKHS on $D = D_1 \times \dots \times D_d$ with reproducing kernel
$$
K(x, y) \,=\, \prod_{j=1}^d K_j(x_j, y_j), \qquad x, y \in D.
$$
If $f_j \in H_j$ for $j \in \{1, \dots, d\}$, we will denote by $f_1 \otimes \dots \otimes f_d$ the tensor product of the functions $f_j$, i.e.,
$$
(f_1 \otimes \dots \otimes f_d)(x) \,=\, \prod_{j=1}^d f_j(x_j), \qquad x \in D.
$$
We study the tensor product functional $S = S_1 \otimes \dots \otimes S_d$ on $H$, i.e., the functional determined by $S(f_1 \otimes \dots \otimes f_d) = \prod_{j=1}^d S_j(f_j)$. Note that in this paper we assume that $S$ is a tensor product functional, but the results can also be applied to operators, see [15].
The complexity of the tensor product problem is given by the numbers
$$
n(\varepsilon, d) \,=\, \min \bigl\{ n \in \mathbb{N}_0 \colon e(n,d) \le \varepsilon \bigr\},
$$
where $e(n,d)$ denotes the $n$th minimal worst case error of the $d$-variate problem, and has been studied in many papers for a long time. Traditionally, the functional $S$ and the dimension $d$ were fixed and the interest was in the behavior for large $n$. Here we are mainly interested in the curse of dimensionality: Do we need exponentially many (in $d$) function values to obtain an error $\varepsilon$ when we fix the error demand $\varepsilon$ and vary the dimension $d$?
To answer this question one has to prove upper bounds as well as lower bounds. Upper bounds for specific problems can often be proved by quasi-Monte Carlo methods, see [2]. In addition, there exists a general method, the analysis of the Smolyak algorithm, see [14, 20] and the recent supplement [16].
In this paper we concentrate on lower bounds, again for a fixed error demand and (possibly) large dimension. Such bounds were first studied in [11] for certain special problems and later in [12] with the technique of decomposable kernels. This technique is rather general as long as we consider finite smoothness. The technique does not work, however, for analytic functions.
In contrast, the approach of [19] also works for polynomials and other analytic functions. We continue this approach since it opens the door for more lower bounds under general assumptions. One result of this paper (Theorem 10) reads as follows:
Theorem 10.
For all $j \in \mathbb{N}$, let $H_j$ be a RKHS and let $S_j$ be a bounded linear functional on $H_j$ with unit norm and nonnegative representer $h_j$. Assume that there are functions $f_j$ and $g_j$ in $H_j$ and a number $\alpha_j > 0$ such that $\{h_j, f_j, g_j\}$ is orthonormal in $H_j$ and $f_j^2 + g_j^2 \ge \alpha_j^2 h_j^2$. Then the tensor product problem satisfies for all $n, d \in \mathbb{N}$ that
$$
e(n,d)^2 \,\ge\, 1 - n \prod_{j=1}^d \bigl( 1 + \alpha_j^2 \bigr)^{-1}.
$$
In particular, we obtain the curse of dimensionality if all the $\alpha_j$ are equal. As an application, we use this result to obtain lower bounds for the complexity of the integration problem on Korobov spaces with increasing smoothness, see Section 4.1. These lower bounds complement existing upper bounds from [14, Section 10.7.4].
The paper is organized as follows. We first provide a general connection between the worst case error of quadrature formulas and the positive semidefiniteness of certain matrices in Section 2. We then turn to tensor product problems. We start with homogeneous tensor products (i.e., all factors and are equal), see Section 3, where we also consider several examples. The nonhomogeneous case is then discussed in Section 4. This section also contains the results for Korobov spaces with increasing smoothness. Section 3 and Section 4 are based on a recent generalization of Schur’s product theorem from [19]. In Section 5, we discuss further generalizations of Schur’s theorem and possible applications to numerical integration. Finally, in Section 6, we consider lower bounds for the error of quadrature formulas that use random point sets (as opposed to optimal point sets). This allows us to approach situations where we conjecture but cannot prove the curse of dimensionality for optimal point sets.
2 Lower bounds and positive definiteness
We begin with a somewhat surprising result: Lower bounds for the worst case error of quadrature formulas are equivalent to the statement that certain matrices are positive semidefinite.
Proposition 1.
Let $H$ be a RKHS on $D$ with kernel $K$ and let $S(f) = \langle f, h \rangle$ for some $h \in H$.

The following are equivalent for all $x_1, \dots, x_n \in D$ and all $s > 0$.

The matrix $\bigl( K(x_i, x_j) - \frac{1}{s}\, h(x_i)\, h(x_j) \bigr)_{i,j=1}^n$ is positive semidefinite,

$r(x_1, \dots, x_n)^2 \,\ge\, \|h\|^2 - s$.


The following are equivalent for all $n \in \mathbb{N}$ and all $s > 0$.

The matrix $\bigl( K(x_i, x_j) - \frac{1}{s}\, h(x_i)\, h(x_j) \bigr)_{i,j=1}^n$ is positive semidefinite for all $x_1, \dots, x_n \in D$,

$e(n)^2 \,\ge\, \|h\|^2 - s$.

Proof.
To prove the first part, we fix $x_1, \dots, x_n \in D$. For $a_1, \dots, a_n \in \mathbb{R}$ and $t \in \mathbb{R}$, consider the quadrature rule $A_t$ with
$$
A_t(f) \,=\, t \sum_{i=1}^n a_i\, f(x_i).
$$
Clearly, we have
$$
e(A_t)^2 \,=\, \|h\|^2 - 2t\beta + t^2\gamma,
\qquad \text{where } \beta = \sum_{i=1}^n a_i\, h(x_i), \quad \gamma = \sum_{i,j=1}^n a_i a_j\, K(x_i, x_j).
$$
The function $\varphi(t) = e(A_t)^2$ attains its minimum for
$$
t^* \,=\, \frac{\beta}{\gamma},
$$
where $0/0$ is interpreted as $0$. This yields
$$
\min_{t \in \mathbb{R}} e(A_t)^2 \,=\, \|h\|^2 - \frac{\beta^2}{\gamma}.
$$
The last expression is greater than or equal to $\|h\|^2 - s$ if, and only if,
$$
\Bigl( \sum_{i=1}^n a_i\, h(x_i) \Bigr)^2 \,\le\, s \sum_{i,j=1}^n a_i a_j\, K(x_i, x_j)
$$
holds. Since the weights were arbitrary, we have $r(x_1, \dots, x_n)^2 \ge \|h\|^2 - s$ if, and only if, this inequality holds for all $a_1, \dots, a_n \in \mathbb{R}$, i.e., when the matrix $\bigl( K(x_i, x_j) - \frac{1}{s}\, h(x_i)\, h(x_j) \bigr)_{i,j=1}^n$ is positive semidefinite. This yields the statement.
The proof of the second part follows from the first part by taking the infimum over all $x_1, \dots, x_n \in D$. ∎
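Assuming the reconstruction above, the equivalence of Proposition 1 can be checked numerically on a small, hypothetical example: for the kernel $K(x,y) = 1 + xy$, in which $1$ and $x$ are orthonormal, and $S(f) = \int_0^1 f$ with representer $h(x) = 1 + x/2$, the matrix $G - \frac1s\, b b^{\mathsf T}$ should switch from positive semidefinite to indefinite exactly at $s = b^{\mathsf T} G^{-1} b$:

```python
import numpy as np

# Hypothetical example: 2-dimensional RKHS with kernel K(x, y) = 1 + x*y,
# in which {1, x} is orthonormal; S(f) = int_0^1 f has representer h(x) = 1 + x/2.
K = lambda x, y: 1 + x * y
h = lambda x: 1 + x / 2

x = np.array([0.2, 0.9])
G = K(x[:, None], x[None, :])   # Gram matrix G_ij = K(x_i, x_j)
b = h(x)                        # b_i = h(x_i)

s_star = b @ np.linalg.solve(G, b)   # critical value s* = b^T G^{-1} b

def min_eig(s):
    """Smallest eigenvalue of G - (1/s) b b^T."""
    return np.linalg.eigvalsh(G - np.outer(b, b) / s).min()

print(min_eig(1.01 * s_star), min_eig(0.90 * s_star))  # PSD above s*, not below
```

Since two points already integrate linear functions exactly, in this example $s^*$ equals $\|h\|^2 = 5/4$ and the radius of information is zero.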
The idea now is to use some properties of the Schur product of matrices. We denote by $d_M \in \mathbb{R}^n$ the vector of diagonal entries of $M$ whenever $M \in \mathbb{R}^{n \times n}$. Moreover, if $A$ and $B$ are two symmetric matrices, we write $A \succeq B$ if $A - B$ is positive semidefinite. The Schur product of $A$ and $B$ is the matrix $A \circ B$ with $(A \circ B)_{ij} = A_{ij}\, B_{ij}$ for $i, j \in \{1, \dots, n\}$. The classical Schur product theorem states that the Schur product of two positive semidefinite matrices is again positive semidefinite. However, this statement can be improved [19].
Proposition 2.
Let $M$ be a positive semidefinite $n \times n$ matrix. Then
$$
M \circ M \,\succeq\, \frac{1}{n}\, d_M\, d_M^{\mathsf T}.
$$
A direct proof of Proposition 2 may be found in [19]. As pointed out to the authors by Dmitriy Bilyk, the result also follows from the theory of positive definite functions on spheres as developed in the classical work of Schoenberg [17]. To sketch this approach, let $(C_m^\lambda)_{m \ge 0}$ denote the sequence of Gegenbauer (or ultraspherical) polynomials. These are polynomials of degree $m$ on $[-1,1]$, which are orthogonal with respect to the weight $(1-t^2)^{\lambda - 1/2}$. Here, $\lambda$ is a real parameter. By the Addition Theorem [1, Theorem 9.6.3], there is a positive constant $c_{m,\lambda}$, which depends only on $m$ and $\lambda$, such that
(1) $\displaystyle\quad C_m^\lambda(\langle x, y \rangle) \,=\, c_{m,\lambda} \sum_{k=1}^{N} Y_k(x)\, Y_k(y), \qquad x, y \in \mathbb{S}^{n-1},$
where $\mathbb{S}^{n-1}$ is the unit sphere in $\mathbb{R}^n$ with $\lambda = (n-2)/2$, and $Y_1, \dots, Y_N$ form an orthonormal basis of the space of harmonic polynomials of degree $m$ in $n$ variables.
If now $M$ is a positive semidefinite $n \times n$ matrix with ones on the diagonal, then $M \circ M - \frac1n J$ is also positive semidefinite, where $J$ denotes the all-ones matrix. Indeed, we can write
$$
M_{ij} \,=\, \langle x_i, x_j \rangle
$$
for some vectors $x_1, \dots, x_n \in \mathbb{S}^{n-1}$ and use (1) to compute for every $a \in \mathbb{R}^n$ that
$$
\sum_{i,j=1}^n a_i a_j \Bigl( M_{ij}^2 - \frac1n \Bigr) \,\ge\, 0.
$$
For positive semidefinite matrices with ones on the diagonal, Proposition 2 then follows by observing that $t^2 - \frac1n$ is (up to a multiplicative constant) exactly the polynomial $C_2^\lambda(t)$ with $\lambda = (n-2)/2$. Finally, the general form of Proposition 2 is obtained by a simple scaling argument.
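Proposition 2, in the form reconstructed above, can also be tested numerically: for random positive semidefinite matrices, the smallest eigenvalue of $M \circ M - \frac1n\, d_M d_M^{\mathsf T}$ should never be significantly negative.

```python
import numpy as np

# Numerical check of Proposition 2 (as reconstructed above): for a PSD matrix M
# of size n x n, the Hadamard square satisfies M o M >= (1/n) * d d^T, d = diag(M).
rng = np.random.default_rng(0)

worst = np.inf
for _ in range(200):
    n = int(rng.integers(2, 8))
    A = rng.standard_normal((n, int(rng.integers(1, n + 1))))
    M = A @ A.T                       # random PSD matrix (possibly singular)
    d = np.diag(M)
    gap = M * M - np.outer(d, d) / n  # this difference must be PSD
    worst = min(worst, np.linalg.eigvalsh(gap).min())

print("smallest eigenvalue over all trials:", worst)
```

The factor $\frac1n$ is what later produces the explicit $n$-dependence in the lower bounds for $e(n,d)$.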
3 Homogeneous tensor products
We now use Propositions 1 and 2 in order to obtain the curse of dimensionality for certain tensor product (integration) problems. In this section, we consider homogeneous tensor products, i.e., $H_j = H$ and $S_j = S$ for all $j$. Moreover, we work with normalized problems, i.e., we assume that $\|S\| = \|h\| = 1$.
Theorem 3.
Let $H$ be a RKHS on $D$ and assume that the representer $h$ of $S$ is nonnegative. Assume further that there are functions $f$ and $g$ on $D$ such that $h$, $f$ and $g$ are orthonormal in $H$, and let $\alpha > 0$ be such that $f^2 + g^2 \ge \alpha^2 h^2$. Then the tensor product problem satisfies
$$
e(n,d)^2 \,\ge\, 1 - n\, \bigl( 1 + \alpha^2 \bigr)^{-d} \qquad \text{for all } n, d \in \mathbb{N}.
$$
In particular, it suffers from the curse of dimensionality.
Proof.
Without loss of generality, we may assume that $H$ is 3-dimensional, i.e., that $h$, $f$, and $g$ form an orthonormal basis. The function
$$
k(x, y) \,=\, h(x)\, h(y) + f(x)\, f(y) + g(x)\, g(y)
$$
is a reproducing kernel on $D$. The reproducing kernel $K$ of $H$ satisfies $K(x,y) = k(x,y)$ for all $x, y \in D$. Moreover, we have $K(x,x) \ge (1+\alpha^2)\, h(x)^2$ for all $x \in D$. Therefore, we also have $h_d \ge 0$ and $K_d(x,x) \ge (1+\alpha^2)^d\, h_d(x)^2$ for $x \in D^d$, where $h_d$ is the $d$-fold tensor product of $h$ and $K_d$ is the $d$-fold tensor product of $K$. By Proposition 2 the matrix
$$
\Bigl( K_d(x_i, x_j) - \frac{(1+\alpha^2)^d}{n}\, h_d(x_i)\, h_d(x_j) \Bigr)_{i,j=1}^n
$$
is positive semidefinite for all $x_1, \dots, x_n \in D^d$. Proposition 1 yields that
$$
e(n,d)^2 \,\ge\, 1 - n\, (1+\alpha^2)^{-d}.
$$
The second statement is implied by the first statement; observe that the problem is normalized since $\|h_d\| = \|h\|^d = 1$ for every $d$. ∎
Let us consider several applications of this result.
3.1 Trigonometric polynomials of degree 1
This example is already contained in Vybíral [19]; now we can see it as an application of the general Theorem 3. Take $f(x) = \cos(2\pi x)$ and $g(x) = \sin(2\pi x)$ on $D = [0,1]$. Then one obtains $h \equiv 1$, $\alpha = 1$ and $K(x,y) = 1 + \cos(2\pi(x-y))$. Hence we study, for $d = 1$, trigonometric polynomials of degree 1 on the interval $[0,1]$ with the norm
$$
\| a_0 + a_1 \cos(2\pi x) + a_2 \sin(2\pi x) \|_H^2 \,=\, a_0^2 + a_1^2 + a_2^2.
$$
For $d > 1$ we take the tensor product space $H_d$ with the kernel
$$
K_d(x, y) \,=\, \prod_{j=1}^d \bigl( 1 + \cos(2\pi(x_j - y_j)) \bigr).
$$
We obtain $\|S_d\| = 1$, and $2^{d/2}$ is the norm of the embedding of $H_d$ into the space of continuous functions with the sup norm. Hence functions in the unit ball of $H_d$ may take large values if $d$ is large, but the integral is bounded by one. By applying Theorem 3 we obtain the following result of [19] that solved an open problem of [10], see also [6].
Corollary 4.
Let $H$ be the RKHS on $[0,1]$ with the orthonormal system $1$, $\cos(2\pi x)$ and $\sin(2\pi x)$. Then the integration problem on the tensor product space $H_d$ satisfies
$$
e(n,d)^2 \,\ge\, 1 - n\, 2^{-d} \qquad \text{for all } n, d \in \mathbb{N}.
$$
In particular, it suffers from the curse of dimensionality.
Remark 1.
The same vector space, of dimension $3^d$, was studied earlier by Sloan and Woźniakowski [18], who proved the curse of dimensionality for a different norm. It follows already from this work that exactly $2^d$ function values are needed for the exact integration of trigonometric polynomials of degree 1 (in each variable). We do not know whether this result was known even before.
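Under the kernel normalization assumed in this subsection, the bound of Corollary 4 and the threshold $2^d$ for exact integration can be observed numerically via $r^2 = 1 - b^{\mathsf T} G^{+} b$ with $b = (1, \dots, 1)$:

```python
import numpy as np
from itertools import product

# Worst case error of optimal quadrature for trigonometric polynomials of
# degree 1, using the kernel K_d(x,y) = prod_j (1 + cos(2*pi*(x_j - y_j)))
# and representer h = 1 (the normalization assumed in this section).
def radius_sq(points):
    P = np.asarray(points, dtype=float)        # shape (n, d)
    diff = P[:, None, :] - P[None, :, :]
    G = np.prod(1 + np.cos(2 * np.pi * diff), axis=2)
    b = np.ones(len(P))
    return 1 - b @ np.linalg.pinv(G) @ b       # r^2 = ||h||^2 - b^T G^+ b

# d = 1: one point leaves squared error 1/2; the points {0, 1/2} are exact.
r_one = radius_sq([[0.25]])
r_two = radius_sq([[0.0], [0.5]])

# d = 2: the 2^d product points integrate exactly; 3 of them cannot.
grid = [list(p) for p in product([0.0, 0.5], repeat=2)]
r_grid = radius_sq(grid)
r_three = radius_sq(grid[:3])
print(r_one, r_two, r_grid, r_three)
```

For these particular point sets the computed values match the lower bound $1 - n\,2^{-d}$ exactly, illustrating that the bound is sharp here.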
3.2 Gaussian integration for polynomials of degree 2
Let $H$ be the space of polynomials on $\mathbb{R}$ of degree at most 2, equipped with the scalar product
$$
\langle f, g \rangle \,=\, \int_{\mathbb{R}} f(x)\, g(x) \,\mathrm{d}\gamma(x),
$$
where $\gamma$ is the standard Gaussian measure on $\mathbb{R}$.
where is the standard Gaussian measure on . We consider the integration problem
The tensor product problem for $d \in \mathbb{N}$ is given by the functional
$$
S_d(f) \,=\, \int_{\mathbb{R}^d} f(x) \,\mathrm{d}\gamma_d(x)
$$
on the tensor product space $H_d$, which consists of all $d$-variate polynomials of degree at most 2 in each variable. Here, $\gamma_d$ is the standard Gaussian measure on $\mathbb{R}^d$. By Theorem 3, this problem suffers from the curse of dimensionality. We have
$$
e(n,d)^2 \,\ge\, 1 - n\, \Bigl( \frac{3}{2} \Bigr)^{-d}.
$$
To see this, it is enough to choose $f(x) = x$ and $g(x) = (x^2 - 1)/\sqrt{2}$ and observe that the functions $1$, $f$ and $g$ are orthonormal in $H$. Using the notation from the proof of Theorem 3, we obtain $h = 1$, $\alpha^2 = \inf_{x \in \mathbb{R}} \bigl( f(x)^2 + g(x)^2 \bigr) = 1/2$, and
$$
K(x, y) \,=\, 1 + xy + \frac{(x^2 - 1)(y^2 - 1)}{2}.
$$
Corollary 5.
Take the RKHS $H$ on $\mathbb{R}$ which is generated by the orthonormal system $1$, $x$ and $(x^2 - 1)/\sqrt{2}$. Then the problem of Gaussian integration on the tensor product space $H_d$ satisfies
$$
e(n,d)^2 \,\ge\, 1 - n\, \Bigl( \frac{3}{2} \Bigr)^{-d}.
$$
In particular, we obtain the curse of dimensionality and the fact that exactly $2^d$ function values are needed for exact integration.
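The orthonormality of the system $1$, $x$, $(x^2-1)/\sqrt{2}$ under the standard Gaussian measure (the choice assumed above), and the exactness of the $2$-point-per-variable Gauss–Hermite rule, can be verified with a short numerical check using the probabilists' Hermite quadrature from NumPy:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Check that 1, x, (x^2 - 1)/sqrt(2) are orthonormal with respect to the
# standard Gaussian measure, using Gauss-Hermite quadrature (weight e^{-x^2/2}).
nodes, weights = hermegauss(8)
weights = weights / np.sqrt(2 * np.pi)    # normalize to the Gaussian measure

basis = [lambda x: np.ones_like(x),
         lambda x: x,
         lambda x: (x ** 2 - 1) / np.sqrt(2)]

Gram = np.array([[np.sum(weights * f(nodes) * g(nodes)) for g in basis]
                 for f in basis])
print(np.round(Gram, 12))                 # should be the 3 x 3 identity matrix

# The 2-point rule (nodes +-1, weights 1/2) integrates all three basis
# functions exactly, so 2^d product points suffice in dimension d.
n2, w2 = hermegauss(2)
vals = [np.sum((w2 / np.sqrt(2 * np.pi)) * f(n2)) for f in basis]
print(vals)                               # close to [1, 0, 0]
```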
3.3 Integration for polynomials of degree 2 on
Let $H$ be the space of polynomials of degree at most 2, defined on an interval of unit length. For convenience and symmetry we take the interval $D = [-1/2, 1/2]$. The univariate problem is given by $S(f) = \int_{-1/2}^{1/2} f(x) \,\mathrm{d}x$, and for our construction we need an orthonormal system $1$, $f$, $g$ in $H$ with $f^2 + g^2 \ge \alpha^2$ for some $\alpha > 0$; a suitable choice of $f$ and $g$, and of the weights in the norm below, yields such an $\alpha$. If we apply Theorem 3 then we obtain the following.
Corollary 6.
Take the RKHS $H$ on $[-1/2, 1/2]$ which is generated by the orthonormal system $1$, $f$ and $g$. Then the integration problem on the tensor product space $H_d$ satisfies
$$
e(n,d)^2 \,\ge\, 1 - n\, (1 + \alpha^2)^{-d}.
$$
In particular, we obtain the curse of dimensionality and the fact that exactly $2^d$ function values are needed for exact integration.
The norm in $H$ is a weighted norm of the Taylor coefficients. For our specific choice of $f$ and $g$ we obtain the norm
or
Observe that we are “forced” by our approach to take this norm with these very specific parameters, although one can use embeddings and slightly modified norms. For the given norm we obtain
and its supremum over $D$ (or over $D^d$ in the multivariate case) is the norm of the embedding of $H$ into the space of continuous functions with the sup norm. Hence functions in the unit ball of $H_d$ may take large values if $d$ is large, but the integral is bounded by one.
3.4 Integration of functions with zero boundary conditions
As another application of Theorem 3, we consider the integration of smooth functions that vanish on the boundary. For that sake, let $D = [0,1]$ and $S(f) = \int_0^1 f(x) \,\mathrm{d}x$ for $f \in H$. Further, let $H$ be a three-dimensional space spanned by
(2)  
which form an orthonormal basis of $H$. The functions $h$, $f$ and $g$ are defined as in the proof of Theorem 3. We consider the integration problem on $H$ and its tensor product version on $H_d$, and we observe that the assumptions of Theorem 3 are satisfied.
Corollary 7.
Let $H$ be the RKHS on $[0,1]$ with the orthonormal basis defined in (2). Then the integration problem on the tensor product space $H_d$ satisfies the lower bound of Theorem 3, i.e., it suffers from the curse of dimensionality.
Remark 2.
Let us observe that every $f \in H$ satisfies $f(0) = f(1) = f'(0) = f'(1) = 0$. This means that the functions from $H_d$ and all their partial derivatives of order at most one in any of the variables vanish on the boundary of the unit cube. Furthermore, the norm on $H_d$ can be given explicitly.
3.5 Hilbert spaces with decomposable kernels
Another known method to prove lower bounds for tensor product functionals works for so-called decomposable kernels and slight modifications thereof, see [14, Chapter 11]. There is some intersection where both our method and the decomposable kernel method work.
Let $H$ be a RKHS on $D \subseteq \mathbb{R}$ with reproducing kernel $K$. The kernel is called decomposable if there exists $a^* \in D$ such that the sets
$$
D_1 \,=\, \{ x \in D \colon x \le a^* \} \qquad\text{and}\qquad D_2 \,=\, \{ x \in D \colon x \ge a^* \}
$$
are nonempty and $K(x,y) = 0$ if $x < a^* < y$ or $y < a^* < x$. If $K$ is decomposable, then $H$ is an orthogonal sum of $H_1$ and $H_2$, consisting of the functions in $H$ with support in $D_1$ and $D_2$, respectively.
Choosing now arbitrary suitably scaled functions $f$ with support in $D_1$ and $g$ with support in $D_2$ such that $\|f\|_H = 1$ and $\|g\|_H = 1$, we automatically have that $f$ and $g$ are orthonormal in $H$. The proof of Theorem 3 is easily adapted to this case and gives the next corollary.
Corollary 8.
Let $H$ be a RKHS on $D$ with a decomposable reproducing kernel. Let $f$ and $g$ be as above and let $\alpha > 0$ be such that $f^2 + g^2 \ge \alpha^2 h^2$. Then the tensor product problem satisfies
$$
e(n,d)^2 \,\ge\, 1 - n\, (1 + \alpha^2)^{-d}.
$$
In particular, it suffers from the curse of dimensionality.
One particular example where this corollary is applicable is the centered discrepancy. Here $H$ consists of absolutely continuous functions on $[0,1]$ with $f(1/2) = 0$ and $f' \in L_2([0,1])$. The norm of $f$ in $H$ is the $L_2$ norm of $f'$. The kernel of $H$ is decomposable at $a^* = 1/2$, and the normalized representer $h$ of the integration problem is nonnegative. Then $f$ is the normalized restriction of $h$ to the interval $[0, 1/2]$ and, similarly, $g$ is the normalized restriction of $h$ to the interval $[1/2, 1]$. Since $h$ is nonnegative, such functions $f$ and $g$ exist.
Corollary 8 is a special case of [14, Theorem 11.8]. As such, it will not give any new results. Nevertheless, it seems appropriate to note the connection. It would be interesting to know whether the full strength of [14, Theorem 11.8] can be obtained via this approach or the variants described in the next section.
3.6 Exact Integration
Based on the results above one may ask whether
$$
e(n, d) \,>\, 0 \qquad \text{for all } n < 2^d
$$
holds for all nontrivial tensor product problems. Here a problem is called trivial if $e(1,1) = 0$; then we also have $e(1,d) = 0$ for all $d$. The answer is “no”: examples with $e(1,1) > 0$ but $e(n,d) = 0$ for some $n < 2^d$ can be found in [14, Section 11.3], which is based on [11]. We obtain the following criterion.
Corollary 9.
If there are functions $f$ and $g$ such that $h$, $f$ and $g$ are linearly independent with $S(f) = 0$ and $S(g) = 0$, then
$$
e(n, d) \,>\, 0 \qquad \text{for all } n < 2^d.
$$
4 Nonhomogeneous tensor products
We now turn to tensor products whose factors $H_j$ and $S_j$ may be different for each $j$. We start with the following generalization of Theorem 3, which involves an additional parameter $\alpha_j$ for every factor.
Theorem 10.
For all $j \in \mathbb{N}$, let $H_j$ be a RKHS and let $S_j$ be a bounded linear functional on $H_j$ with unit norm and nonnegative representer $h_j$. Assume that there are functions $f_j$ and $g_j$ in $H_j$ and a number $\alpha_j > 0$ such that $\{h_j, f_j, g_j\}$ is orthonormal in $H_j$ and $f_j^2 + g_j^2 \ge \alpha_j^2 h_j^2$. Then the tensor product problem satisfies for all $n, d \in \mathbb{N}$ that
$$
e(n,d)^2 \,\ge\, 1 - n \prod_{j=1}^d \bigl( 1 + \alpha_j^2 \bigr)^{-1}.
$$
Proof.
Let $D_j$ be the domain of the space $H_j$. Without loss of generality, we may assume that $\{h_j, f_j, g_j\}$ is an orthonormal basis of $H_j$. In this case, the reproducing kernel of $H_j$ is given by
$$
K_j(x, y) \,=\, h_j(x)\, h_j(y) + f_j(x)\, f_j(y) + g_j(x)\, g_j(y).
$$
Let us consider the functions $f_j / h_j$ and $g_j / h_j$ on the domain of $H_j$. These functions are well defined since we may assume that $h_j$ is positive, and they are linearly independent since $f_j$ and $g_j$ are linearly independent. The function
$$
N_j(x, y) \,=\, \frac{f_j(x)\, f_j(y) + g_j(x)\, g_j(y)}{h_j(x)\, h_j(y)}
$$
is a reproducing kernel on $D_j$ and its diagonal satisfies $N_j(x,x) \ge \alpha_j^2$. A simple computation shows for all $x, y \in D_j$ that
$$
K_j(x, y) \,=\, h_j(x)\, h_j(y) \bigl( 1 + N_j(x, y) \bigr).
$$
Let now $K_d$ be the reproducing kernel of the product space $H_d$ with domain $D_1 \times \dots \times D_d$ and let $x_1, \dots, x_n$ be points in this domain. We have
where
The application of Proposition 2 yields
and hence
where $h_d = h_1 \otimes \dots \otimes h_d$ is the representer of the product functional $S_d = S_1 \otimes \dots \otimes S_d$. Summing over all subsets of $\{1, \dots, d\}$, we arrive at the desired bound. ∎