1 Introduction and main results
The discrepancy of an $N$-element point set $\mathcal{P}$ in the unit cube $[0,1]^d$ measures the deviation of the empirical distribution of $\mathcal{P}$ from the uniform measure. This concept has important applications in numerical analysis, where so-called Koksma-Hlawka inequalities establish a deep connection between norms of the discrepancy function and worst-case errors of quasi-Monte Carlo integration rules determined by the point set $\mathcal{P}$. For a comprehensive introduction and exposition of this subject we refer the reader to [5, 10, 12] and the references cited therein.
To define the concept of discrepancy, we first introduce the local discrepancy function of an $N$-element point set $\mathcal{P} = \{x_1, \dots, x_N\}$ in $[0,1)^d$, defined as
\[ D_{\mathcal{P}}(t) = \frac{1}{N} \sum_{n=1}^{N} \mathbf{1}_{[0,t)}(x_n) - \lambda_d([0,t)) \qquad \text{for } t = (t_1,\dots,t_d) \in [0,1]^d, \]
where $[0,t) = [0,t_1) \times \cdots \times [0,t_d)$ and $\lambda_d$ stands for the $d$-dimensional Lebesgue measure. We now apply a norm $\|\cdot\|$ to the local discrepancy function to obtain the discrepancy $\|D_{\mathcal{P}}\|$ of the point set $\mathcal{P}$ with respect to the norm $\|\cdot\|$. Of particular interest are the norms on the usual Lebesgue spaces $L_p([0,1]^d)$ ($1 \le p \le \infty$) of $p$-integrable functions on the unit cube $[0,1]^d$. Those lead to the central notions of $L_p$-discrepancy for $1 \le p < \infty$, and the $L_\infty$-discrepancy, which is usually called the star-discrepancy, when $p = \infty$.
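To make the definition concrete, the following small sketch evaluates the local discrepancy function of a toy point set numerically (the helper `local_discrepancy` and the sample points are ours, purely for illustration):

```python
def local_discrepancy(points, t):
    """Local discrepancy D_P(t): fraction of points in the box [0, t) minus its volume."""
    d = len(t)
    count = sum(1 for x in points if all(x[j] < t[j] for j in range(d)))
    volume = 1.0
    for tj in t:
        volume *= tj
    return count / len(points) - volume

# Toy example in dimension d = 1: the two-point set {1/4, 3/4}.
points = [(0.25,), (0.75,)]
print(local_discrepancy(points, (0.5,)))   # 0.0   (1 of 2 points, box volume 1/2)
print(local_discrepancy(points, (0.25,)))  # -0.25 (0 points, box volume 1/4)
```

A norm applied to $t \mapsto D_{\mathcal{P}}(t)$ then yields the discrepancy of the point set.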
The minimal discrepancy with respect to the norm $\|\cdot\|$ in dimension $d$ is the best possible discrepancy over all point sets of size $N$ in the $d$-dimensional unit cube $[0,1]^d$, i.e.,
\[ \operatorname{disc}(N,d) = \inf_{\substack{\mathcal{P} \subseteq [0,1)^d \\ \#\mathcal{P} = N}} \|D_{\mathcal{P}}\|. \]
We compare this value with the initial discrepancy $\operatorname{disc}(0,d)$ given by the discrepancy of the empty point set. Since the initial discrepancy may depend on the dimension, we use it to normalize the minimal discrepancy when we study the dependence of $\operatorname{disc}(N,d)$ on the dimension $d$. We therefore define the inverse of the minimal discrepancy in dimension $d$ as the number $N(\varepsilon,d)$ which is the smallest number $N$ such that a point set with $N$ points exists that reduces the initial discrepancy at least by a factor of $\varepsilon$,
\[ N(\varepsilon,d) = \min\{ N \in \mathbb{N}_0 : \operatorname{disc}(N,d) \le \varepsilon \operatorname{disc}(0,d) \}. \]
In this paper we are interested in how $N(\varepsilon,d)$ depends simultaneously on $\varepsilon$ and the dimension $d$. In general, the dependence of the inverse of the minimal discrepancy can take different forms. For instance, if the dependence on the dimension $d$ or on $\varepsilon^{-1}$ is exponential, then we call the discrepancy intractable. If the inverse of the minimal discrepancy grows exponentially fast in $d$, then the discrepancy is said to suffer from the curse of dimensionality. On the other hand, if $N(\varepsilon,d)$ increases at most polynomially in $d$ and $\varepsilon^{-1}$, as $d$ increases and $\varepsilon$ tends to zero, then the discrepancy is said to be polynomially tractable. This leads us to the following definition.
The discrepancy with respect to the norm $\|\cdot\|$ is polynomially tractable if there are numbers $C > 0$, $p > 0$, and $q \ge 0$ such that
\[ N(\varepsilon,d) \le C d^{\,q} \varepsilon^{-p} \qquad \text{for all } d \in \mathbb{N} \text{ and all } \varepsilon \in (0,1). \tag{1} \]
The infimum over all exponents $p$ such that a bound of the form (1) holds (for some $C$ and $q$) is called the $\varepsilon$-exponent of polynomial tractability.
To cover cases between polynomial tractability and intractability, we now introduce the concept of weak tractability, where $N(\varepsilon,d)$ is not exponential in $d$ and $\varepsilon^{-1}$. This encodes the absence of intractability.
The discrepancy with respect to the norm $\|\cdot\|$ is weakly tractable if
\[ \lim_{\varepsilon^{-1} + d \to \infty} \frac{\ln N(\varepsilon,d)}{\varepsilon^{-1} + d} = 0. \]
The subject of tractability of multivariate problems is a very popular and active area of research and we refer the reader to the books [15, 16] by Novak and Woźniakowski for an introduction to tractability studies of discrepancy and an exhaustive exposition.
A famous result by Heinrich, Novak, Wasilkowski, and Woźniakowski [8], based on the theory of empirical processes and Talagrand's majorizing measure theorem, shows that the star-discrepancy is polynomially tractable. In fact, they show that $q$ in Definition 1 can be set to one and hence in this case the inverse of the star-discrepancy depends at most linearly on the dimension $d$. It was shown in [8] and [9] that $q = 1$ is the minimal possible value in Definition 1 for the star-discrepancy. Determining the optimal exponent for $\varepsilon^{-1}$ is an open problem. On the other hand, the $L_2$-discrepancy is known to be intractable, as shown by Woźniakowski [17] (see also [15]). The behavior of the inverse of the $L_p$-discrepancy in between, where $2 < p < \infty$, seems to be unknown.
Note that, due to the normalization with the initial discrepancy, we cannot infer a continuous change in the behavior of $N(\varepsilon,d)$ as $p$ goes from $2$ to $\infty$. A natural assumption seems to be that the $L_p$-discrepancy is intractable for any $p < \infty$. If correct, this would mean that there is a sharp change from intractability to polynomial tractability as one goes from $L_p$ with finite $p$ to $L_\infty$. A natural question which hence arises is what happens between those two cases $p < \infty$ and $p = \infty$.
To study this question, we introduce for $\alpha \ge 1$ the exponential Orlicz norms $\|\cdot\|_{\psi_\alpha}$, which for a measurable function $f$ defined on $[0,1]^d$ are given by
\[ \|f\|_{\psi_\alpha} = \inf\Big\{ t > 0 : \int_{[0,1]^d} \psi_\alpha\Big( \frac{|f(x)|}{t} \Big) \, dx \le 1 \Big\}, \]
where $\psi_\alpha(x) = e^{x^\alpha} - 1$. The assumption $\alpha \ge 1$ guarantees the convexity of $\psi_\alpha$. These norms play an important role in the study of the concentration of mass in high-dimensional convex bodies and we refer the reader to [3, 4] and the references cited therein for more information. An introduction to the theory of Orlicz spaces can be found in [11]. As we shall see later, the discrepancy with respect to $\psi_\alpha$-norms turns out to be polynomially tractable as well.
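Numerically, the Luxemburg-type infimum defining $\|\cdot\|_{\psi_\alpha}$ can be approximated by bisection. The sketch below is our own illustration (it uses the normalization with threshold $1$ and $\psi_\alpha(x) = e^{x^\alpha} - 1$, one of several equivalent conventions) and replaces the integral by an average over sample values of $f$:

```python
import math

def psi_alpha_norm(f_vals, alpha, tol=1e-10):
    """Approximate ||f||_{psi_alpha} = inf{ t > 0 : mean(exp((|f|/t)^alpha) - 1) <= 1 },
    where the integral over [0,1]^d is replaced by an average over sample values f_vals."""
    def feasible(t):
        return sum(math.exp((abs(v) / t) ** alpha) - 1.0 for v in f_vals) / len(f_vals) <= 1.0
    lo, hi = tol, 1.0
    while not feasible(hi):      # grow the bracket until t is large enough
        hi *= 2.0
    while hi - lo > tol:         # bisect for the infimum
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
    return hi

# For the constant function f = 1 the norm is 1 / (log 2)^(1/alpha) in closed form.
alpha = 2.0
approx = psi_alpha_norm([1.0] * 100, alpha)
exact = 1.0 / math.log(2.0) ** (1.0 / alpha)
print(abs(approx - exact) < 1e-6)  # True
```

The constant-function test case makes the convention explicit: $e^{(1/t)^\alpha} - 1 \le 1$ exactly when $t \ge (\ln 2)^{-1/\alpha}$.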
In our context it is interesting to also study variations of these norms exhibiting different types of behavior of $N(\varepsilon,d)$ as a function of the dimension $d$. In fact, we may write $\psi_\alpha$ as the series
\[ \psi_\alpha(x) = e^{x^\alpha} - 1 = \sum_{k=1}^{\infty} \frac{x^{\alpha k}}{k!} \]
and consider the more general case where $k!$ is replaced by a function $c \colon \mathbb{N} \to (0,\infty)$, i.e.,
\[ M_{c,\alpha}(x) = \sum_{k=1}^{\infty} \frac{x^{\alpha k}}{c(k)} \tag{2} \]
for a non-decreasing function $c$ with $\lim_{k \to \infty} c(k+1)/c(k) = \infty$. Note that the growth condition on $c$ guarantees, according to the ratio test, the absolute convergence of the series (2) for all $x \ge 0$. Choosing $c(k) = k!$ takes us back to the $\psi_\alpha$-norm, which is therefore a special case of the more general setting.
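As a quick numerical sanity check of the series viewpoint (our own sketch, with the choice $c(k) = k!$ recovering $\psi_\alpha$), one can compare partial sums with the closed form and watch the ratio-test ratios decay:

```python
import math

# Partial sums of sum_{k>=1} x^(alpha*k) / k!, i.e. the case c(k) = k!,
# which recovers psi_alpha(x) = exp(x^alpha) - 1.
def psi_alpha_partial(x, alpha, K):
    return sum(x ** (alpha * k) / math.factorial(k) for k in range(1, K + 1))

x, alpha = 1.5, 2.0
exact = math.exp(x ** alpha) - 1.0
print(abs(psi_alpha_partial(x, alpha, 30) - exact) < 1e-9)  # True

# Ratio test: consecutive terms shrink like x^alpha / (k + 1) -> 0,
# reflecting the growth condition c(k + 1) / c(k) = k + 1 -> infinity.
term = lambda k: x ** (alpha * k) / math.factorial(k)
ratios = [term(k + 1) / term(k) for k in (5, 10, 20)]
print(ratios == sorted(ratios, reverse=True))  # True: the ratios decrease toward 0
```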
Below we will characterize functions $c$ for which the discrepancy with respect to $\|\cdot\|_{M_{c,\alpha}}$, given by
\[ \|f\|_{M_{c,\alpha}} = \inf\Big\{ t > 0 : \int_{[0,1]^d} M_{c,\alpha}\Big( \frac{|f(x)|}{t} \Big) \, dx \le 1 \Big\}, \]
is polynomially tractable and weakly tractable. In general, if $M$ is zero in zero, increasing, convex, and satisfies $\lim_{x \to 0} M(x)/x = 0$ and $\lim_{x \to \infty} M(x)/x = \infty$, then $M$ is called an $N$-function and $\|\cdot\|_M$ is a norm. The limit assumptions simply guarantee that the convex dual of $M$ is again an $N$-function. Such types of norms are known as Luxemburg norms, named after W. A. J. Luxemburg [14]. One typically just speaks of Orlicz functions and Orlicz norms.
The aim of this paper is to show the following result.
Let $\alpha \ge 1$. Then the following hold:
The discrepancy with respect to the $\psi_\alpha$-norm is polynomially tractable.
For any for which there exists an and a constant such that for all
the discrepancy with respect to $\|\cdot\|_{M_{c,\alpha}}$ is polynomially tractable. The $\varepsilon$-exponent of polynomial tractability is at most $2$.
For any $c$ which satisfies
the discrepancy with respect to $\|\cdot\|_{M_{c,\alpha}}$ is weakly tractable.
Note that by choosing $\alpha = p$ we obtain the classical $L_p$-norm. In this case $c(1) = 1$ and $c(k) = \infty$ for all $k \ge 2$. This choice of $c$ does not satisfy any of the conditions in Theorem 1.
We can in fact provide a more accurate estimate for the exponential Orlicz norms and the $\varepsilon$-exponent of polynomial tractability.
For any $\alpha \ge 1$, we have
In particular, the $\varepsilon$-exponent of polynomial tractability is at most $2$.
This upper bound on $N(\varepsilon,d)$ shows that the inverse of the star-discrepancy depends linearly on the dimension, thereby matching the result of Heinrich, Novak, Wasilkowski, and Woźniakowski [8].
In the following Section 2 we present the proofs of our main results, where we start by establishing an equivalence between the norms $\|\cdot\|_{M_{c,\alpha}}$ and an expression involving a supremum of classical $L_p$-norms. Subsection 2.1 is then devoted to the proof of Theorem 1. The proof of Theorem 2 is presented in Subsection 2.2.
2 The proofs
with . In the special case of exponential Orlicz norms such an equivalence is a classical result in asymptotic geometric analysis and may be found, without explicit constants, in the monographs [3, Lemma 3.5.5] and [4, Lemma 2.4.2]. In the context of this paper it is important that these constants do not depend on the dimension $d$.
Let $\alpha \ge 1$ and $d \in \mathbb{N}$. For any measurable function $f$ on $[0,1]^d$, we have the estimates
In particular, for any , we have
Using the series expansion of $M_{c,\alpha}$, we obtain
Therefore, we have
This implies the upper bound in (6) for all .
In the opposite direction, we need to choose such that
which implies that
If , and , then
In any case, for all , we have that
which implies the result since .
The bound (7) for the $\psi_\alpha$-norms can be shown using similar arguments together with Stirling's formula
\[ \sqrt{2\pi k}\, \Big(\frac{k}{e}\Big)^{k} \le k! \le e \sqrt{k}\, \Big(\frac{k}{e}\Big)^{k}, \qquad k \in \mathbb{N}. \tag{8} \]
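The two-sided Stirling bounds (one standard form of (8)) are easy to spot-check numerically; the snippet below is our own sanity check and not part of the proof:

```python
import math

# Numerical spot check of the two-sided Stirling bounds
#   sqrt(2*pi*k) * (k/e)^k  <=  k!  <=  e * sqrt(k) * (k/e)^k .
def stirling_lower(k):
    return math.sqrt(2.0 * math.pi * k) * (k / math.e) ** k

def stirling_upper(k):
    return math.e * math.sqrt(k) * (k / math.e) ** k

print(all(stirling_lower(k) <= math.factorial(k) <= stirling_upper(k)
          for k in range(2, 40)))  # True
```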
We are now prepared to present the proofs of our main results.
2.1 The proof of Theorem 1
An important consequence of Lemma 1 is that the constants do not depend on the dimension, and hence the Orlicz norm discrepancy satisfies the same tractability properties as the discrepancy with respect to the norm introduced in (5). Therefore, in the following proof we will only use the latter norm.
It is well known and easily checked (see, e.g., [16, p. 54]) that for every $p \in [1,\infty)$, the initial $L_p$-discrepancy in dimension $d$ satisfies
\[ \operatorname{disc}_p(0,d) = \Big( \int_{[0,1]^d} (t_1 \cdots t_d)^p \, dt \Big)^{1/p} = \frac{1}{(p+1)^{d/p}}. \]
If $p = \infty$, then the initial discrepancy is $1$ for every dimension $d$. This implies that
where we used the choice to obtain the last inequality.
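The closed form $(p+1)^{-d/p}$ for the initial $L_p$-discrepancy follows because the integral of $(t_1 \cdots t_d)^p$ over $[0,1]^d$ factorizes into $d$ copies of $\int_0^1 t^p\,dt = 1/(p+1)$; the following sketch (our own sanity check, with a hypothetical helper name) verifies this numerically:

```python
# Sanity check of disc_p(0, d) = (p + 1)^(-d/p) for the empty point set,
# where |D(t)| = t_1 * ... * t_d.  The d-dimensional integral of (t_1...t_d)^p
# factorizes into d copies of the 1-D integral of t^p, each equal to 1/(p + 1).
def initial_lp_discrepancy(p, d, n=200_000):
    # midpoint rule for the 1-D factor integral of t^p over [0, 1]
    one_dim = sum(((i + 0.5) / n) ** p for i in range(n)) / n
    return (one_dim ** d) ** (1.0 / p)

p, d = 2, 5
print(abs(initial_lp_discrepancy(p, d) - (p + 1) ** (-d / p)) < 1e-6)  # True
```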
From  we know that
Hence, we have
where stands for the discrepancy with respect to the norm introduced in (5). This implies that
where for , . This concludes the proof of the second statement in Theorem 1. As mentioned above, if we choose $c(k) = k!$, then we obtain the $\psi_\alpha$-norm. Using Stirling's formula (8) together with the previous result, we can deduce the first part of Theorem 1.
2.2 The proof of Theorem 2
First we show the corresponding result for which is based on the norm . Recall that for a measurable function , we defined . Let us start with a lower bound for the initial discrepancy. We have
where we have chosen . The final estimate follows from the fact that
attains its minimum in with minimal value .
Now let . Then from Gnewuch [6, Theorem 3] we obtain that
and from Aistleitner and Hofer [2, Corollary 1] that for any
where the expectation and probability are with respect to the point set
consisting of $N$ independent and uniformly distributed points. Now Markov's inequality implies that there exists an $N$-element point set in $[0,1)^d$ such that
For this point set $\mathcal{P}$, we obtain
for all . Note that we may choose leading to .
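The probabilistic mechanism behind this step, that random point sets have star-discrepancy of order $\sqrt{d/N}$ with high probability, can be illustrated empirically. The sketch below is ours and only bounds the star-discrepancy from below, by maximizing $|D_{\mathcal{P}}(t)|$ over randomly sampled anchors $t$:

```python
import random

# Rough empirical illustration (not part of the proof): estimate the
# star-discrepancy of N uniform random points from below via random anchors t.
def est_star_discrepancy(points, d, trials=2000, seed=1):
    rng = random.Random(seed)
    best, n = 0.0, len(points)
    for _ in range(trials):
        t = [rng.random() for _ in range(d)]
        vol = 1.0
        for tj in t:
            vol *= tj
        count = sum(1 for x in points if all(x[j] < t[j] for j in range(d)))
        best = max(best, abs(count / n - vol))
    return best

rng, d = random.Random(0), 2
results = {}
for N in (100, 400):  # quadrupling N should roughly halve the estimate
    pts = [[rng.random() for _ in range(d)] for _ in range(N)]
    results[N] = est_star_discrepancy(pts, d)
print(results)
```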
Using the second part of Lemma 1, we obtain
From this we finally obtain the upper bound for $N(\varepsilon,d)$.
-  Ch. Aistleitner. Covering numbers, dyadic chaining and discrepancy. J. Complexity, 27(6):531–540, 2011.
-  Ch. Aistleitner and M. Hofer. Probabilistic discrepancy bound for Monte Carlo point sets. Math. Comp., 83(287):1373–1381, 2014.
-  S. Artstein-Avidan, A. Giannopoulos, and V. D. Milman. Asymptotic geometric analysis. Part I, volume 202 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2015.
-  S. Brazitikos, A. Giannopoulos, P. Valettas, and B.-H. Vritsiou. Geometry of isotropic convex bodies, volume 196 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2014.
-  J. Dick and F. Pillichshammer. Digital nets and sequences. Discrepancy theory and quasi-Monte Carlo integration. Cambridge University Press, Cambridge, 2010.
-  M. Gnewuch. Bounds for the average -extreme and the -extreme discrepancy. Electron. J. Combin., 12:Research Paper 54, 11, 2005.
-  M. Gnewuch and N. Hebbinghaus. Private communication, 2018.
-  S. Heinrich, E. Novak, G. W. Wasilkowski, and H. Woźniakowski. The inverse of the star-discrepancy depends linearly on the dimension. Acta Arith., 96(3):279–302, 2001.
-  A. Hinrichs. Covering numbers, Vapnik-Červonenkis classes and bounds for the star-discrepancy. J. Complexity, 20(4):477–483, 2004.
-  A. Hinrichs. Discrepancy, integration and tractability. In Monte Carlo and quasi-Monte Carlo methods 2012, volume 65 of Springer Proc. Math. Stat., pages 129–172. Springer, Heidelberg, 2013.
-  M. A. Krasnosel’skiĭ and Ja. B. Rutickiĭ. Convex functions and Orlicz spaces. Translated from the first Russian edition by Leo F. Boron. P. Noordhoff Ltd., Groningen, 1961.
-  L. Kuipers and H. Niederreiter. Uniform distribution of sequences. Wiley-Interscience [John Wiley & Sons], New York-London-Sydney, 1974. Pure and Applied Mathematics.
-  J. Lindenstrauss and L. Tzafriri. Classical Banach spaces I: Sequence spaces, volume 92 of Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer-Verlag, Berlin-New York, 1977.
-  W. A. J. Luxemburg. Banach function spaces. Thesis, Technische Hogeschool te Delft, 1955.
-  E. Novak and H. Woźniakowski. Tractability of multivariate problems. Vol. 1: Linear information, volume 6 of EMS Tracts in Mathematics. European Mathematical Society (EMS), Zürich, 2008.
-  E. Novak and H. Woźniakowski. Tractability of multivariate problems. Volume II: Standard information for functionals, volume 12 of EMS Tracts in Mathematics. European Mathematical Society (EMS), Zürich, 2010.
-  H. Woźniakowski. Efficiency of quasi-Monte Carlo algorithms for high dimensional integrals. In Monte Carlo and quasi-Monte Carlo methods 1998 (Claremont, CA), pages 114–136. Springer, Berlin, 2000.