1. Introduction
A natural way to numerically calculate the integral of a function $f\colon[0,1]\to\mathbb{R}$ is to take a sequence $x=(x_n)_{n\ge1}\subset[0,1)$ and use the approximation
\[
\int_0^1 f(t)\,dt\approx\frac{1}{N}\sum_{n=1}^{N}f(x_n). \tag{1.1}
\]
Introduce the error
\[
\Bigl|\frac{1}{N}\sum_{n=1}^{N}f(x_n)-\int_0^1 f(t)\,dt\Bigr|. \tag{1.2}
\]
In the Monte Carlo method (MC), one takes $x$ in (1.1) to be a sequence of random numbers sampled uniformly from $[0,1]$. The expression inside the absolute value signs of (1.2) is then a random variable with expected value 0 and standard deviation of the order $N^{-1/2}$ as $N\to\infty$, see e.g. [1]. The quasi-Monte Carlo method (QMC) is based on instead taking a deterministic $x$ in (1.1) with “good spread” in $[0,1)$. This can lead to a better convergence rate of (1.2) than when taking random $x$. In fact, there exist deterministic $x$ such that the rate of decay of (1.2) is close to $N^{-1}$ as $N\to\infty$ (see below).
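To illustrate the two approaches numerically, here is a minimal sketch (in Python) comparing MC and QMC on the toy integrand $f(t)=t^2$, whose integral over $[0,1]$ is $1/3$; the QMC points are taken from the base-2 van der Corput sequence mentioned below. The integrand, the sample sizes and the helper names (`van_der_corput`, `mc_error`, `qmc_error`) are arbitrary choices made only for this illustration.

```python
import random


def van_der_corput(n, base=2):
    """n-th term of the van der Corput sequence (radical inverse of n in the given base)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q


def f(t):
    return t * t  # test integrand; exact integral over [0, 1] is 1/3


def mc_error(N, rng):
    """Monte Carlo: average of f over N independent uniform samples."""
    s = sum(f(rng.random()) for _ in range(N)) / N
    return abs(s - 1.0 / 3.0)


def qmc_error(N):
    """Quasi-Monte Carlo: average of f over the first N van der Corput points."""
    s = sum(f(van_der_corput(n)) for n in range(1, N + 1)) / N
    return abs(s - 1.0 / 3.0)


if __name__ == "__main__":
    rng = random.Random(0)
    for N in (10**2, 10**3, 10**4, 10**5):
        print(N, "MC:", mc_error(N, rng), "QMC:", qmc_error(N))
```

For the smooth integrand above one typically observes the MC error decaying roughly like $N^{-1/2}$ and the QMC error decaying almost like $N^{-1}$, in line with the discussion above.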
The aim of this note is to discuss error estimates for QMC on $[0,1]$. More specifically, we establish an improvement of Koksma's elegant inequality. Koksma's inequality is the main general error estimate for QMC; we need some auxiliary notions in order to formulate it.
Let $x=(x_n)_{n\ge1}\subset[0,1)$. The star discrepancy of the set $\{x_1,\dots,x_N\}$ (i.e., the set of the first $N$ terms of $x$) is given by
\[
D_N^*(x)=\sup_{0<t\le1}\Bigl|\frac{\#\{1\le n\le N: x_n\in[0,t)\}}{N}-t\Bigr|.
\]
(Here, $\#A$ denotes the cardinality of the set $A$.) In a sense, the quantity $D_N^*(x)$ measures how much the distribution of the points $x_1,\dots,x_N$ deviates from the uniform distribution. One can show that for any $x$ there holds $D_N^*(x)\ge c\,(\log N)/N$ for infinitely many $N$, where $c>0$ is an absolute constant. On the other hand, there is a sequence $x$ (called the van der Corput sequence, see [5]) such that $D_N^*(x)\le C(\log N)/N$ for all $N\ge2$.

The total $p$-variation ($1\le p<\infty$) of a function $f\colon[0,1]\to\mathbb{R}$ is given by
\[
V_p(f)=\sup\Bigl(\sum_{k}|f(b_k)-f(a_k)|^{p}\Bigr)^{1/p},
\]
where the supremum is taken over all non-overlapping collections of intervals $[a_k,b_k]$ contained in $[0,1]$, and we write $V(f)=V_1(f)$ if $p=1$. If $V_p(f)<\infty$ we say that $f$ has bounded $p$-variation (written $f\in BV_p$). Note that $BV=BV_1\subset BV_p\subset BV_q$ for $1\le p\le q$. See [4] for a thorough discussion of bounded $p$-variation and applications.
Koksma's inequality states that for any $f\in BV$,
\[
\Bigl|\frac{1}{N}\sum_{n=1}^{N}f(x_n)-\int_0^1 f(t)\,dt\Bigr|\le V(f)\,D_N^*(x). \tag{1.3}
\]
In other words, the error of QMC is bounded by a product of two factors, the first measuring the “spread” of the sequence $x$ and the second measuring the variation of the integrand $f$. An immediate consequence of (1.3) is that we obtain the “almost optimal” error rate $O((\log N)/N)$ if $f\in BV$ and $x$ is the previously mentioned van der Corput sequence.
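As a numerical sanity check of (1.3), note that for a finite point set the star discrepancy can be computed exactly from the sorted points $x_{(1)}\le\dots\le x_{(N)}$ via the classical formula $D_N^*=\max_{1\le n\le N}\max\{n/N-x_{(n)},\,x_{(n)}-(n-1)/N\}$. The sketch below (in Python) compares the actual error with the Koksma bound for the van der Corput points and the integrand $f(t)=t$, for which $V(f)=1$ and $\int_0^1 f=1/2$; the choice of integrand and sample sizes is arbitrary.

```python
def van_der_corput(n, base=2):
    """n-th term of the van der Corput sequence (radical inverse of n in the given base)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q


def star_discrepancy(points):
    """Exact star discrepancy D_N* of a finite point set in [0, 1)."""
    xs = sorted(points)
    N = len(xs)
    return max(max((i + 1) / N - x, x - i / N) for i, x in enumerate(xs))


if __name__ == "__main__":
    for N in (2**6, 2**10, 2**14):
        pts = [van_der_corput(n) for n in range(1, N + 1)]
        err = abs(sum(pts) / N - 0.5)        # QMC error for f(t) = t, exact integral 1/2
        bound = 1.0 * star_discrepancy(pts)  # Koksma bound: V(f) * D_N* with V(f) = 1
        print(N, "error:", err, "Koksma bound:", bound, "holds:", err <= bound)
```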
A drawback of (1.3) is that it provides no error estimate in the case when $V(f)=\infty$. For instance, we were originally interested in finding a general error estimate for $f\in BV_p$, $p>1$ (see Corollary 1.2 below). This led us to our main result (Theorem 1.1), which is a sharpening of (1.3) that is effective also when $V(f)=\infty$. In fact, Theorem 1.1 provides an estimate of (1.2) for any bounded function.
For this, we recall the notion of modulus of variation, first introduced in [6] (see also [2]). For any $f\colon[0,1]\to\mathbb{R}$ and $n\in\mathbb{N}$, we set
\[
\nu_f(n)=\sup\sum_{k=1}^{n}|f(b_k)-f(a_k)|,
\]
where the supremum is taken over all non-overlapping collections $\{[a_k,b_k]\}$ of at most $n$ sub-intervals of $[0,1]$. An attractive feature of the modulus of variation is that it is finite for any bounded function. Of course, $V(f)<\infty$ if and only if $\nu_f(n)=O(1)$ as $n\to\infty$, and the growth of $\nu_f(n)$ then tells us how “badly” a function has unbounded 1-variation.
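Although $\nu_f(n)$ is defined by a supremum over all collections of intervals, a lower approximation is straightforward to compute by restricting the endpoints $a_k,b_k$ to a finite grid and optimizing over the grid by dynamic programming. The sketch below (in Python) does this for $f(t)=\sin(1/t)$, a bounded function with $V(f)=\infty$; the grid size, the test function and the function names are arbitrary choices made only for this illustration.

```python
import math


def modulus_of_variation(f, n, grid_size=400):
    """Approximate nu_f(n): the supremum of sum |f(b_k) - f(a_k)| over at most n
    non-overlapping subintervals of [0, 1].  Endpoints are restricted to a uniform
    grid, so the returned value is a lower bound for the true supremum."""
    ts = [i / grid_size for i in range(grid_size + 1)]
    vs = [f(t) for t in ts]
    M = len(ts)
    # dp[j][i]: best total using at most j intervals, all ending at grid index <= i
    dp = [[0.0] * M for _ in range(n + 1)]
    for j in range(1, n + 1):
        for i in range(1, M):
            # either no interval ends exactly at index i ...
            best = max(dp[j][i - 1], dp[j - 1][i])
            # ... or one interval is [ts[a], ts[i]] for some a < i
            for a in range(i):
                cand = dp[j - 1][a] + abs(vs[i] - vs[a])
                if cand > best:
                    best = cand
            dp[j][i] = best
    return dp[n][M - 1]


def g(t):
    # a bounded function with unbounded total variation on [0, 1]
    return math.sin(1.0 / t) if t > 0 else 0.0


if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16):
        print(n, round(modulus_of_variation(g, n), 4))
```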
The next result is our main theorem.
Theorem 1.1.
For any bounded measurable function $f\colon[0,1]\to\mathbb{R}$ and any $N\in\mathbb{N}$ there holds
\[
\Bigl|\frac{1}{N}\sum_{n=1}^{N}f(x_n)-\int_0^1 f(t)\,dt\Bigr|\le\frac{13\,\nu_f(m)}{m},\qquad m=\Bigl\lceil\frac{1}{D_N^*(x)}\Bigr\rceil. \tag{1.4}
\]
The constant 13 in (1.4) is a consequence of our method of proof and certainly not optimal. However, the main point is that we can replace the total variation in (1.3) with a quantity that is finite for all bounded functions. An immediate corollary of (1.4) is a Koksma-type inequality for $BV_p$, $p>1$ (which is not possible to derive from (1.3)).
Corollary 1.2.
If $f\in BV_p$ for some $1\le p<\infty$, then
\[
\Bigl|\frac{1}{N}\sum_{n=1}^{N}f(x_n)-\int_0^1 f(t)\,dt\Bigr|\le 13\,V_p(f)\,\bigl(D_N^*(x)\bigr)^{1/p}. \tag{1.5}
\]
In a sense the estimate (1.5) cannot be improved: there is a constant $c>0$ such that for any $N$ we can find a sequence $x$ and a function $f$ with $V_p(f)\le1$ and
\[
\Bigl|\frac{1}{N}\sum_{n=1}^{N}f(x_n)-\int_0^1 f(t)\,dt\Bigr|\ge c\,\bigl(D_N^*(x)\bigr)^{1/p}.
\]
We shall also discuss error estimates for functions with some continuity properties. Our result here (Corollary 1.3) is known, see [5], but Theorem 1.1 allows us to derive it in a very simple way. Define the modulus of continuity of $f$ by
\[
\omega_f(\delta)=\sup\bigl\{|f(t)-f(s)|: t,s\in[0,1],\ |t-s|\le\delta\bigr\},
\]
and let $\omega$ be a non-decreasing function with $\omega(0)=0$, strictly concave and differentiable on $(0,1)$. We denote by $H^\omega$ the class of functions $f$ such that
\[
\omega_f(\delta)\le\omega(\delta),\qquad 0\le\delta\le1.
\]
In particular, if $\omega(\delta)=\delta^\alpha$ with $0<\alpha<1$, then $H^\omega$ is the space of $\alpha$-Hölder continuous functions.
Corollary 1.3 (see e.g. [5], p. 146).
If $f\in H^\omega$, then
\[
\Bigl|\frac{1}{N}\sum_{n=1}^{N}f(x_n)-\int_0^1 f(t)\,dt\Bigr|\le 13\,\omega\bigl(D_N^*(x)\bigr).
\]
2. Proofs
We first state a few results that we will use in the proof of Theorem 1.1.
Lemma 2.1.
Let $f\colon[0,1]\to\mathbb{R}$ be bounded, let $k\in\mathbb{N}$ and let $P$ be the continuous first-order spline interpolating $f$ at the knots $j/k$, $j=0,1,\dots,k$. Then $V(P)\le\nu_f(k)$.
Proof.
Let $E=\{e_0<e_1<\dots<e_r\}$ be the subset of $\{0,1/k,\dots,1\}$ consisting of the points of local extremum of $P$ (we include the endpoints $0$ and $1$ in $E$). Since $P$ is linear between consecutive knots, monotone between consecutive points of $E$, and satisfies $P(j/k)=f(j/k)$ for every $j$, it is easy to see that
\[
V(P)=\sum_{i=1}^{r}|P(e_i)-P(e_{i-1})|=\sum_{i=1}^{r}|f(e_i)-f(e_{i-1})|\le\nu_f(k),
\]
where the last inequality holds since the sum extends over at most $k$ terms and the intervals $[e_{i-1},e_i]$ are non-overlapping. ∎
Lemma 2.2.
Let $g$ be a bounded measurable function on $[0,1]$ and let $I\subset[0,1]$ be an interval. Then there exists $\xi\in I$ such that
\[
g(\xi)\le\frac{1}{|I|}\int_I g(t)\,dt.
\]
Proof.
Clearly we cannot have $g(t)>\frac{1}{|I|}\int_I g(s)\,ds$ for all $t\in I$, since then it would follow (by integrating this inequality over $I$) that
\[
\int_I g(t)\,dt>\int_I g(t)\,dt,
\]
which is of course a contradiction. ∎
We now prove Theorem 1.1.
Proof of Theorem 1.1.
Without loss of generality, we may assume that $D_N^*(x)<1/2$. Indeed, if $D_N^*(x)\ge1/2$, then $m=\lceil 1/D_N^*(x)\rceil\le2$, and (1.4) follows from the trivial estimate
\[
\Bigl|\frac{1}{N}\sum_{n=1}^{N}f(x_n)-\int_0^1 f(t)\,dt\Bigr|\le\sup f-\inf f=\nu_f(1)\le\nu_f(m)\le\frac{13\,\nu_f(m)}{m}.
\]
Set $D:=D_N^*(x)$ and $m:=\lceil 1/D\rceil$, and let $P$ be the continuous first-order spline interpolating $f$ at the knots $j/m$, $j=0,1,\dots,m$. Since $D<1/2$ we have $m\ge3$, and since $m-1<1/D$ we also have
\[
\frac{1}{m}\le D<\frac{1}{m-1}\le\frac{3}{2m}.
\]
Write $g:=f-P$. Then
\[
\frac{1}{N}\sum_{n=1}^{N}f(x_n)-\int_0^1 f(t)\,dt=\Bigl(\frac{1}{N}\sum_{n=1}^{N}P(x_n)-\int_0^1 P(t)\,dt\Bigr)+\Bigl(\frac{1}{N}\sum_{n=1}^{N}g(x_n)-\int_0^1 g(t)\,dt\Bigr).
\]
Hence,
\[
\Bigl|\frac{1}{N}\sum_{n=1}^{N}f(x_n)-\int_0^1 f(t)\,dt\Bigr|\le\Bigl|\frac{1}{N}\sum_{n=1}^{N}P(x_n)-\int_0^1 P(t)\,dt\Bigr|+\Bigl|\frac{1}{N}\sum_{n=1}^{N}g(x_n)-\int_0^1 g(t)\,dt\Bigr|.
\]
By (1.3) and Lemma 2.1, we have
\[
\Bigl|\frac{1}{N}\sum_{n=1}^{N}P(x_n)-\int_0^1 P(t)\,dt\Bigr|\le V(P)\,D\le\nu_f(m)\,D\le\frac{3\,\nu_f(m)}{2m}. \tag{2.1}
\]
We shall estimate the second term. We have
\[
\frac{1}{N}\sum_{n=1}^{N}g(x_n)-\int_0^1 g(t)\,dt=\sum_{j=1}^{m}\Bigl(\frac{1}{N}\sum_{x_n\in I_j}g(x_n)-\int_{I_j}g(t)\,dt\Bigr).
\]
By Lemma 2.2, there are $\xi_j\in I_j$ for $j=1,\dots,m$ such that
\[
g(\xi_j)\le m\int_{I_j}g(t)\,dt.
\]
Thus,
\[
\frac{1}{N}\sum_{n=1}^{N}g(x_n)-\int_0^1 g(t)\,dt\le\sum_{j=1}^{m}\Bigl(\frac{1}{N}\sum_{x_n\in I_j}\bigl(g(x_n)-g(\xi_j)\bigr)+g(\xi_j)\Bigl(\frac{N_j}{N}-\frac{1}{m}\Bigr)\Bigr), \tag{2.2}
\]
where $I_j=[(j-1)/m,\,j/m)$ for $j=1,\dots,m-1$, $I_m=[(m-1)/m,\,1]$ and $N_j=\#\{1\le n\le N: x_n\in I_j\}$. We shall first prove that
\[
\Bigl|\frac{N_j}{N}-\frac{1}{m}\Bigr|\le 2D,\qquad j=1,\dots,m. \tag{2.3}
\]
Indeed, the (extreme) discrepancy of $x$ is defined as
\[
D_N(x)=\sup_{0\le a<b\le1}\Bigl|\frac{\#\{1\le n\le N: x_n\in[a,b)\}}{N}-(b-a)\Bigr|.
\]
It is well-known (see [5], p. 91) that $D_N(x)\le 2D_N^*(x)$. Note that if $I\subset[0,1)$ is an interval of length $1/m$ we have
\[
\Bigl|\frac{\#\{1\le n\le N: x_n\in I\}}{N}-\frac{1}{m}\Bigr|\le D_N(x).
\]
Thus, for $j=1,\dots,m-1$ we have $|N_j/N-1/m|\le D_N(x)\le 2D$. A similar inequality holds for $j=m$, since $x_n\in[0,1)$ for all $n$ and hence $N_m=\#\{1\le n\le N: x_n\in[(m-1)/m,1)\}$. This proves (2.3). Hence, by (2.2), we have
\[
\frac{1}{N}\sum_{n=1}^{N}g(x_n)-\int_0^1 g(t)\,dt\le\sum_{j=1}^{m}\Bigl(\frac{N_j}{N}\,a_j+2D\,|g(\xi_j)|\Bigr)\le\frac{4}{m}\sum_{j=1}^{m}a_j+\frac{3}{m}\sum_{j=1}^{m}|g(\xi_j)|,
\]
where $a_j:=\max_{x_n\in I_j}|g(x_n)-g(\xi_j)|$ (with $a_j:=0$ if $N_j=0$), and where we used (2.3) together with $N_j/N\le 1/m+2D\le 4/m$ and $2D\le 3/m$. Furthermore, for every $t\in\overline{I_j}$ the value $P(t)$ lies between $f((j-1)/m)$ and $f(j/m)$. Consequently, $|g(\xi_j)|\le|f(\xi_j)-f(\sigma_j)|$ for one of the endpoints $\sigma_j$ of $I_j$, and for all $u,v\in\overline{I_j}$,
\[
|g(u)-g(v)|\le|f(u)-f(v)|+|P(u)-P(v)|\le|f(u)-f(v)|+\bigl|f(j/m)-f((j-1)/m)\bigr|.
\]
Choosing, for each $j$ with $N_j\ge1$, a point $y_j\in I_j$ among the $x_n$ at which the maximum defining $a_j$ is attained, the intervals with endpoints $y_j,\xi_j$ (respectively $\xi_j,\sigma_j$, respectively $(j-1)/m,j/m$) are non-overlapping for different $j$, and each of the three families contains at most $m$ intervals. It follows that
\[
\sum_{j=1}^{m}|g(\xi_j)|\le\nu_f(m)\qquad\text{and}\qquad\sum_{j=1}^{m}a_j\le\sum_{j=1}^{m}|f(y_j)-f(\xi_j)|+\sum_{j=1}^{m}\bigl|f(j/m)-f((j-1)/m)\bigr|\le 2\nu_f(m).
\]
Consequently,
\[
\frac{1}{N}\sum_{n=1}^{N}g(x_n)-\int_0^1 g(t)\,dt\le\frac{8\,\nu_f(m)}{m}+\frac{3\,\nu_f(m)}{m}=\frac{11\,\nu_f(m)}{m}.
\]
Applying the same argument to $-f$ (whose interpolating spline is $-P$ and whose modulus of variation coincides with that of $f$) gives the corresponding lower bound, so that
\[
\Bigl|\frac{1}{N}\sum_{n=1}^{N}g(x_n)-\int_0^1 g(t)\,dt\Bigr|\le\frac{11\,\nu_f(m)}{m},
\]
and by (2.1) we obtain
\[
\Bigl|\frac{1}{N}\sum_{n=1}^{N}f(x_n)-\int_0^1 f(t)\,dt\Bigr|\le\frac{3\,\nu_f(m)}{2m}+\frac{11\,\nu_f(m)}{m}\le\frac{13\,\nu_f(m)}{m},
\]
which is (1.4).
∎
Proposition 2.3.
Let $f\colon[0,1]\to\mathbb{R}$ be bounded and let $n\in\mathbb{N}$, then we have
\[
\nu_f(n)\le n^{1-1/p}\,V_p(f)\qquad(1\le p<\infty), \tag{2.4}
\]
and, if $f\in H^\omega$,
\[
\nu_f(n)\le n\,\omega(1/n). \tag{2.5}
\]
Proof.
The inequality (2.4) follows immediately from Hölder's inequality. For (2.5), take non-overlapping intervals $[a_k,b_k]\subset[0,1]$, $k=1,\dots,n$; then clearly
\[
\sum_{k=1}^{n}|f(b_k)-f(a_k)|\le\sum_{k=1}^{n}\omega_f(b_k-a_k)\le\sum_{k=1}^{n}\omega(b_k-a_k).
\]
Define
\[
\Phi(n):=\max\Bigl\{\sum_{k=1}^{n}\omega(t_k): t_1,\dots,t_n\ge0,\ \sum_{k=1}^{n}t_k\le1\Bigr\}. \tag{2.6}
\]
Since $\sum_{k=1}^{n}(b_k-a_k)\le1$ and $\omega$ is non-decreasing, we clearly have
\[
\nu_f(n)\le\Phi(n).
\]
To calculate $\Phi(n)$, we use Lagrange multipliers. The critical point of the Lagrangian function solves
\[
\omega'(t_k)=\lambda,\quad k=1,\dots,n,\qquad\sum_{k=1}^{n}t_k=1.
\]
By the strict concavity of $\omega$, the above system has the unique solution $t_1=\dots=t_n=1/n$. Hence, the maximum (2.6) is
\[
\Phi(n)=n\,\omega(1/n).
\]
∎
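For the reader's convenience, here is a minimal sketch of how Proposition 2.3 combines with Theorem 1.1 to give Corollaries 1.2 and 1.3, using the notation $m=\lceil 1/D_N^*(x)\rceil$ from (1.4), so that $1/m\le D_N^*(x)$. If $f\in BV_p$, then (1.4) and (2.4) give
\[
\Bigl|\frac{1}{N}\sum_{n=1}^{N}f(x_n)-\int_0^1 f(t)\,dt\Bigr|\le\frac{13\,\nu_f(m)}{m}\le 13\,V_p(f)\,m^{-1/p}\le 13\,V_p(f)\,\bigl(D_N^*(x)\bigr)^{1/p},
\]
which is (1.5). Similarly, if $f\in H^\omega$, then (1.4), (2.5) and the monotonicity of $\omega$ give
\[
\Bigl|\frac{1}{N}\sum_{n=1}^{N}f(x_n)-\int_0^1 f(t)\,dt\Bigr|\le 13\,\omega(1/m)\le 13\,\omega\bigl(D_N^*(x)\bigr),
\]
which is Corollary 1.3.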
References
- [1] R. E. Caflisch, ”Monte Carlo and quasi-Monte Carlo methods”, Acta Numerica 7 (1998), 1–49.
- [2] Z. A. Chanturiya, ”The modulus of variation of a function and its application in the theory of Fourier series”, Dokl. Akad. Nauk SSSR 214 (1974), 63–66.
- [3] J. Dick and F. Pillichshammer, Digital Nets and Sequences: Discrepancy Theory and Quasi–Monte Carlo Integration, Cambridge University Press, Cambridge, 2010.
- [4] R. M. Dudley and R. Norvaiša, Differentiability of Six Operators on Nonsmooth Functions and p-Variation. With the collaboration of Jinghua Qian. Lecture Notes in Mathematics, 1703. Springer-Verlag, Berlin, 1999.
- [5] L. Kuipers and H. Niederreiter, Uniform Distribution of Sequences, John Wiley and Sons, New York, 1974.
- [6] R. Lagrange, ”Sur les oscillations d’ordre supérieur d’une fonction numérique”, Ann. Scient. Ecole Norm. Sup. 82 (1965), 101–130.
- [7] A. B. Owen, Monte Carlo theory, methods and examples, preprint, Stanford University, 2013.