1. Introduction and Results
For and a point set we define the dispersion of by
where the supremum is over all axis-parallel boxes with intervals , and denotes the (Lebesgue) volume of . As we are interested in point sets which make the above quantity as small as possible, we additionally define, for , the th minimal dispersion
and, for , its inverse function
Hence, is the minimal cardinality of a point set that has dispersion smaller than .
Besides the fact that the above geometric quantities are interesting in their own right, they have also attracted attention in the numerical analysis community, especially when it comes to very high-dimensional applications. The reason is that bounds on the (minimal) dispersion lead to bounds on worst-case errors, and hence on the complexity, of some numerical problems, including optimization and approximation in various settings, see [5, 27, 30, 32, 36, 37, 40]. The situation is similar to that of the much more widely studied discrepancy, which corresponds to certain numerical integration problems, see e.g. [8, 9, 12, 28, 29, 31].
Moreover, bounding the dispersion is clearly also related to the problem of finding the largest empty box. In dimension two, this is the Maximum Empty Rectangle Problem, which is one of the oldest problems in computational geometry. For the state of the art and further references we refer to [13, 14, 15, 16, 24].
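In small dimensions the dispersion of a finite point set can be computed by brute force, since a maximal empty axis-parallel box can be assumed to have its interval endpoints among the point coordinates and the boundary values 0 and 1. The following Python sketch (our illustration only, with hypothetical names; it is not one of the cited algorithms for the Maximum Empty Rectangle Problem) does this in dimension two:

```python
from itertools import combinations

def dispersion_2d(points):
    """Naive computation of the dispersion of a finite point set in
    [0,1]^2: the volume of the largest open axis-parallel box that
    contains no point of the set.  Illustration only; assumes the
    maximizing box has interval endpoints among the point coordinates
    and the boundary values 0 and 1."""
    xs = sorted({0.0, 1.0} | {p[0] for p in points})
    ys = sorted({0.0, 1.0} | {p[1] for p in points})
    best = 0.0
    for x0, x1 in combinations(xs, 2):
        for y0, y1 in combinations(ys, 2):
            # candidate box: the open rectangle (x0, x1) x (y0, y1)
            if all(not (x0 < px < x1 and y0 < py < y1) for px, py in points):
                best = max(best, (x1 - x0) * (y1 - y0))
    return best
```

For the empty set the routine returns 1, and for a single point in the center of the square it returns 1/2, in accordance with the definition above.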
Regarding bounds on the inverse of the minimal dispersion, we have
for some and all , see [1], where the upper bound is attained by certain digital nets. See also [14, 32] for a related upper bound. Although these bounds show the correct dependence on , they are rather bad with respect to the dimension . This gap was narrowed over the past years by several authors, see [21, 33, 35, 39], including the important work of Sosnovec, who proved that the logarithmic dependence on is optimal. In this respect, the best bound at present is
(1.1) 
see [39] and Theorem 4.1 below. Note that the logarithmic dependence is special for the cube, as it is known that, for the same problem on the torus, we have a lower bound linear in , see [38].
The main drawback of these results is that they only show the existence of point sets with small dispersion. The only explicit constructions we are aware of are the above-mentioned digital nets, which lead to a bad dependence, and sparse grids, which satisfy the upper bound , see [21]. It is already clear from the number of points that neither leads to constructions of point sets with small dispersion that can be carried out in time polynomial in . Moreover, as the existence proofs are based on random points on a finite grid, one could use the naive algorithm: try each of the possible configurations, calculate its dispersion (if possible) and output a set that satisfies the requested bound. The running time of this algorithm is (in the worst case) clearly at least exponential in .
Remark 1.1.
Note that the decision problem whether a given point set has discrepancy smaller than is known to be NP-hard, see [18] or [11, Section 3.3], and the same is true for the dispersion if the dimension is part of the input, cf. [6].
For upper bounds on the cost of constructing points with small discrepancy and further literature, see [10, 11, 17]. However, all known algorithms for this problem so far have running time at least exponential in . We hope that the results of this paper will also lead to some progress on this problem.
Reconsidering the bound (1.1), one may hope that points with small dispersion are constructible also in very high dimensions. Ideally, we would like to have algorithms for the construction of point sets of size with dispersion at most , whose computational cost is polynomial in and . However, this seems to be out of reach. (Note that the output alone already costs .)
Here, we focus on the dependence on and show, using deep results from the theory of error-correcting codes, that point sets with small dispersion of size can be constructed in time that is polynomial in . Unfortunately, we do not have good control of the dependence on . It remains an open problem to find fully polynomial constructions for the dispersion.
Our two main results are derandomized versions of the results from [35] and [39]. The two use different approaches and lead to somewhat different results. The first one, as discussed in Section 3, leads to point sets of size that can be constructed in time linear in the size of the output, i.e., in time , see Algorithm 1 and Theorem 3.3. Here, and in the following, means that the implied constant depends on in an unspecified way. A second, and much more involved, construction will be given by Algorithm 2 in Section 4. The corresponding result reads as follows.
Theorem 4.4 Let and . Then, there is an absolute constant , such that Algorithm 2 constructs a set with and
where .
The running time of Algorithm 2 is for some .
Note that, in contrast to Algorithm 1, the point set that is constructed by Algorithm 2 has size that is polynomial in . However, as the proof shows, its computational cost is much larger.
Our general approach is as follows. We start with a detailed inspection of the random constructions of point sets with small dispersion. This will allow us to clearly separate the setting of the construction from its randomized part. It will then become clear which properties are crucial for each of the approaches and what exactly the role of randomness is. Afterwards, we replace the randomized part by a deterministic one.
2. Basics from Coding Theory
Our deterministic constructions of point sets with small dispersion will be obtained essentially by a certain “derandomization” of recent proofs from [35, 39]. As we will rely on rather deep tools from coding theory, we summarize in this section the necessary definitions and results for later use.
2.1. Universal sets
First, we introduce the concept of universal sets, which is also known in coding theory under the name of the independent set problem. It has its roots in the testing of logical circuits [34].
Definition 2.1 (universal sets).
Let be positive integers. We say that is an universal set, if for every index set with , the projection of on contains all possible configurations.
Naturally, one is interested in (randomized and deterministic) constructions of small universal sets. The straightforward randomized construction provides the existence of an universal set of size . On the other hand, [20] gives a lower bound on the size of an universal set of the order . There exist several deterministic constructions of universal sets in the literature (cf. [2, 3, 25]) and we shall rely on the results given in [26].
Theorem 2.2 ([26, Theorem 6]).
There is a deterministic construction of an universal set of size , which can be listed in time linear in the length of the output.
Although the notion of an universal set is not very flexible and comes from a different area of mathematics, we will see in Section 3 that there is indeed a link to sets with small dispersion. Reusing known results from coding theory will already allow us to obtain our first deterministic construction of a point set with cardinality of order . However, in this approach we have only very limited control of the dependence on .
We also need the following natural generalization of universal sets. We could not find this concept in the literature, but we assume that it and the following lemma are known.
Definition 2.3 (universal sets).
Let and be positive integers. We say that is an universal set, if for every index set with , the projection of on contains all possible configurations.
If , Definitions 2.1 and 2.3 coincide, i.e. universal sets are just the usual universal sets. We use the following two observations to transfer the known results about universal sets to our setting.
Lemma 2.4.
Let and be positive integers.
(i) Let be an universal set. Then there is an universal set of at most the same size.
(ii) Let and be an universal set. Then there is an universal set of the same size.
Proof.
The proof is quite straightforward. To show (i), just replace all occurrences of among the coordinates of by zero. For the proof of the second part it is enough to interpret each as a digital representation of . ∎
The direct random construction yields the existence of an universal set of size . Using Theorem 2.2, we can easily obtain a deterministic construction of an universal set of only slightly larger size.
Theorem 2.5.
There is a deterministic construction of an universal set of size , which can be listed in time linear in the length of the output.
2.2. restriction problems
For the derandomization of the analysis of [39] we need a more flexible notion, the so-called restriction problems, see [26, Section 2.2]. Solutions to these problems will be one of the building blocks of our deterministic construction of sets with small dispersion whose size is polynomial in and, still, logarithmic in .
Definition 2.6 (restriction problems).
Let be positive integers and let be invariant under the permutations of the index set . We say that satisfies the restriction problem with respect to , if for every with and for every , there exists with
Definitions 2.1 and 2.3 are indeed special cases of Definition 2.6. To show this, let us choose and let be all the different singleton subsets of . Then, satisfies the restriction problem with respect to if for every index set and every possible there exists an with , i.e. if the restriction of to every index set with elements attains all possible values.
An important parameter of the restriction problems is the minimal size of each of the restriction sets , i.e.
(2.1) 
Random constructions of sets satisfying the restriction problem with parameters and are based on a simple union bound. Indeed, let and with
be fixed. The probability, that a randomly chosen vector
satisfies on is at least . If we choose random vectors independently, the probability that none of them satisfies at is at most
Finally, the probability that there is a set with and , such that no satisfies at is, by the union bound, at most
This expression is smaller than one if
This means that there exist solutions to a restriction problem with parameters of size whenever
where is from (2.1). Theorem 1 of [26] states that there is a deterministic algorithm that outputs such a solution of size equaling the union bound. The main idea of its proof is that the random sampling can be replaced by an exhaustive search through a -wise independent probability space with random variables with values in
Theorem 2.7 ([26, Theorem 1]).
For any restriction problem with parameters with , there is a deterministic algorithm that outputs a collection obeying the restrictions, with the size of the collection equaling
where is from (2.1). The time taken to output the collection is
where is the time complexity of the membership oracle.
Here, the membership oracle is a procedure which, for given , with and , outputs if the restriction of on belongs to . In what follows, it can be executed in time.
2.3. Splitters
The last ingredient of our derandomization procedure is the concept of splitters. They played a central role in [26] as the basic building block of all deterministic constructions given there. Essentially, they allow us to split a large problem into smaller problems which can then be treated by the exhaustive search of Theorem 2.7.
Definition 2.8 (splitter).
Let be positive integers. An splitter is a family of functions from to , such that for every with there is which splits perfectly. This means that the sets are of the same size for all (or of as similar size as possible if ).
Similarly to [26] and [4], we will rely on splitters. By Definition 2.8, is an splitter, if it is a collection of mappings such that for every with , there is an , which is injective on .
Small splitters can be obtained from asymptotically good error-correcting codes, see [4, Lemma 3]. Indeed, let denote the codewords of an error-correcting code of length over the alphabet with normalized Hamming distance at least . That is, and () can agree on at most coordinates. If now with , then there are pairs with . As , there must be a coordinate where all the codewords differ from each other. Finally, if we consider the mappings , we observe that is an splitter of size .
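The passage above can be sketched in code (our illustrative names; the brute-force verification is exponential and meant only for tiny instances): from a list of codewords of equal length, each coordinate of the code induces one map of the family, and the distance condition guarantees that every small index set is split injectively by some map.

```python
from itertools import combinations

def splitter_from_code(codewords):
    """Construction sketched above: given d codewords of equal length n
    (with sufficiently large pairwise normalized Hamming distance), the
    coordinate maps f_j(i) = codewords[i][j] form the splitter family.
    Each map is returned as a tuple of its values on 0, ..., d-1."""
    n = len(codewords[0])
    return [tuple(c[j] for c in codewords) for j in range(n)]

def is_perfect_hash_family(family, d, k):
    """Brute-force check of the splitter property: every k-element
    subset of {0, ..., d-1} is mapped injectively by some member."""
    for S in combinations(range(d), k):
        if not any(len({f[i] for i in S}) == k for f in family):
            return False
    return True
```

For pairs (k = 2) the distance condition is vacuous: any two distinct codewords differ in some coordinate, so the family always separates pairs. Real constructions of small splitters use the explicit codes of [3].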
Such explicit codes exist by [3] with . To see this, note that the rate of a code as above is defined by . By [3, eq. (5)], see also [4, Lemma 3], we obtain that an explicit code with normalized Hamming distance at least exists with
where is an absolute constant and . Simple computations show that this implies for some . This, in turn, implies that we can choose .
The explicit construction of [3] yields a linear code that is based on a twofold concatenation combining the Wozencraft ensemble, Justesen codes and expander codes, which in turn rely on famous deterministic constructions of expander graphs [23]. This construction is ‘uniformly constructive’ (see [3]), i.e., the construction can be done in time growing only polynomially in . Furthermore, the code satisfies [3, eq. (5)] for all , where in our case. Note also that the running time depends only polynomially on , cf. [26, Thm. 3(iv)]. For the details we refer to Sections 3 and 4 of [3].
The following lemma summarizes the discussion above and shows that splitters of relatively small size can be constructed explicitly in polynomial time.
Lemma 2.9 (cf. [4, Lemma 3]).
There is an explicit splitter of size
that can be constructed in polynomial time in and .
Splitters can be used whenever is “very large” compared to . Roughly speaking, and in the context of the present paper, we will transform a restriction problem of size into a restriction problem of size , which can then be solved using the results of Section 2.2. This will lead to construction algorithms with an apparently optimal dependence of their running time on the original problem size . This approach was already used to prove [26, Theorem 6], see Theorem 2.5.
3. Derandomization of Sosnovec’s proof
First, we consider the construction of Sosnovec [35], which gives a logarithmic dependence of on but involves no special control of its dependence on . His main theorem was the following.
Theorem 3.1.
([35, Theorem 2]) For every , there exists a constant , such that for every there is a point set with and
We will see that this result can be essentially derandomized using results from coding theory, while losing only a negligible factor. The drawback of this approach is the extremely bad dependence of on . We sketch the main ideas of Sosnovec’s proof.
3.1. Sosnovec’s proof: the setting
For an integer with , we define
The point set constructed will be a subset of . Furthermore, we define
(3.1) 
to be the set of all boxes with sides parallel to the coordinate axes and volume larger than
Let . The key observation of [35] is that the number of indices with is bounded from above by , a quantity independent of . To be more specific, if we denote
then for every . We will refer to as the set of “active indices” of . If is not of the full possible size, we enlarge it by adding any of the other indices to obtain a set with cardinality equal to . Therefore, we can associate to each (possibly in a non-unique way) a set with and a vector such that any with lies in
Vice versa, if we have a point set such that for every with and every , there is some with , then, by what we just said, intersects every . Therefore, the dispersion of cannot be larger than , i.e., and hence .
To simplify the combinatorial part later on, we multiply all coordinates by , which results in vectors with integer components. This motivates the following definition.
Definition 3.2.
Let . We say that satisfies the condition (S) of the order if for every with , the set of restrictions contains all possible values.
By what we said above, whenever and satisfies the condition (S) of the order , then with satisfies . The proof of Theorem 3.1 is therefore finished once we find a set with satisfying the condition (S) of order .
3.2. Sosnovec’s proof: randomized construction
The rest of the proof in [35] can now be understood as a randomized construction of a small set satisfying the condition (S). Indeed, there are subsets of with elements. We now fix one such set and one vector . The probability that a point
chosen at random (from the uniform distribution) fulfills
is . Therefore, the probability that none of the randomly chosen points fulfills this restriction is . Finally, the probability that there is a set with and a vector such that no satisfies is, by the union bound, at most . By simple calculus, if is of the order for large enough, this expression is smaller than one. Hence, with positive probability, the randomly chosen point set satisfies the condition (S).
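The union bound above can be made numerically concrete. Under the assumption (ours, matching the argument sketched above) that a random point matches a fixed restriction on k coordinates over an alphabet of size m with probability m**(-k), the number of random points n must satisfy C(d, k) * m**k * (1 - m**(-k))**n < 1. A hedged sketch computing the smallest such n:

```python
import math

def union_bound_size(d, k, m):
    """Smallest n for which the union bound of Section 3.2 drops
    strictly below one, i.e. C(d,k) * m**k * (1 - m**(-k))**n < 1.
    Sketch under the assumption that each random point matches a fixed
    restriction with probability m**(-k)."""
    p = m ** (-k)                       # matching probability
    bound = math.comb(d, k) * m ** k    # number of restrictions
    # need (1 - p)**n < 1/bound, i.e. n > log(bound) / (-log(1 - p))
    return math.floor(math.log(bound) / (-math.log1p(-p))) + 1
```

For example, with d = 100, k = 3 and m = 4 this already gives a set of about a thousand points, consistent with a size that grows only logarithmically in d for fixed k and m.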
3.3. Derandomization using universal sets
Definition 3.2 closely resembles the concept of universal sets, see Section 2.1. In particular, it is easy to see that every universal set, after adding 1 to each coordinate, satisfies the condition (S) of the order .
Therefore, we can use Theorem 2.5 to replace the randomized arguments of the last section by a deterministic algorithm. We glue all the components together in the form of an algorithm.
Algorithm 1
(1) For and , choose a positive integer with and set ;
(2) Generate an universal set as in [26, Theorem 6];
(3) Interpret these vectors as digital decompositions to obtain an universal set;
(4) Replace by in all coordinates to obtain an universal set;
(5) Increase all the coordinates by one; (this set satisfies (S));
(6) Finally, divide all the coordinates by and output the point set.
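The pipeline of Algorithm 1 can be sketched end to end in Python. The efficient universal-set generator of [26, Theorem 6] is beyond a short example, so it is replaced here by exhaustive enumeration (feasible only for tiny dimensions); the divisor m + 1 in the final rescaling and all names are our assumptions, since the exact normalization is elided in the extracted text above. The sketch therefore illustrates the coordinate transformations, not the efficient construction.

```python
from itertools import product

def algorithm1_sketch(d, k, m):
    """Toy end-to-end sketch of Algorithm 1.  The parameter k (the
    universality order) is unused by the exhaustive placeholder below,
    which simply takes the full grid.  Returns points in (0, 1)^d."""
    # placeholder for steps (2)-(4): a universal set over {0, ..., m-1};
    # here simply all of {0, ..., m-1}^d, which is trivially universal
    universal = list(product(range(m), repeat=d))
    # step (5): shift coordinates into {1, ..., m}
    shifted = [tuple(c + 1 for c in v) for v in universal]
    # step (6): rescale into the open unit cube (divisor m + 1 assumed)
    return [tuple(c / (m + 1) for c in v) for v in shifted]
```

All output coordinates lie strictly between 0 and 1, so every point is in the interior of the unit cube, as required for the dispersion argument.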
It remains to consider the running time of this algorithm.
Theorem 3.3.
Let and . Then there is a positive constant , such that Algorithm 1 constructs a set with and
The running time of Algorithm 1 is linear in the length of the output.
Proof.
The first part of the theorem is proven by what we said above. Concerning the running time, we obtain from Theorem 2.5 that the universal set, which we generate in steps (2)–(4) of Algorithm 1, can be constructed in linear time and with size
The remaining operations can be done in linear time, without enlarging the point set.
∎
As expected, the dependence of the size of on is rather bad (as it was in [35]), but there is indeed only a logarithmic dependence on .
4. Improving the dependence in
The main aim of [39] was to refine the analysis of [35] and to achieve a better dependence of on , without sacrificing the logarithmic dependence on . The main theorem of [39] was the following.
Theorem 4.1.
Let be a natural number and let . Then there exists a point set with and
This result, too, can be derandomized using results from coding theory. In doing so, we will lose some power of in the size of the point set. However, it will still be of order .
4.1. Enhanced analysis of the random construction
The main novelty of [39] was a more careful splitting of (see (3.1)) into subgroups. To be more specific (and using the notation of [19]), for and , we denote by those cubes from which have approximately the length and left endpoint around for all , i.e.
We denote by the pairs , for which is nonempty. It is easy to see that their number is bounded from above by
(4.1) 
see [19, eq. (2.1)]. If , we further set
(4.2) 
The advantage of dividing into the groups is the surprisingly good control of the probability that a randomly chosen point lies in the intersection of all the cubes from . It is actually, up to a constant, of the same order as the volume of each of the cubes in , i.e. of .
Lemma 4.2.
The aim of [39] was to combine (4.1) with Lemma 4.2 and the union bound. Indeed, the probability that a randomly chosen point avoids is at most . Therefore, the probability that a set of randomly and independently generated points does not intersect is at most , i.e.
By the union bound over all , we get further
As was defined in (4.2) as the intersection of all cubes from , finding a point means that the same point may be found in all cubes in . We conclude that if is large enough to ensure that
i.e., , then the randomly generated intersects every with positive probability. Hence, there exists with such that . This is essentially the result of [39].
4.2. Connection to restriction problems
By what we said above, if a point set intersects for all , then . The randomized construction in [39], which we now want to replace by a deterministic one, was restricted in its choice of points to . Therefore, we define
Definition 4.3.
Let . We say that satisfies the condition (S’) of the order if, for every , it intersects .
The rest of [39] then provides a randomized construction of a small set which satisfies the condition of order . This task has two things in common with restriction problems. First, the system is invariant under permutations of and, second, the number of active coordinates is, for every , bounded from above by
To build the connection between the condition and the restriction problems, we choose the quadruple of parameters , see Definition 2.6, as . The system collects the sets for those which have the corresponding active coordinates in . Finally, is the cardinality of .
More formally, let with for . Then, we set
and define
(4.3) 
We observe that a set satisfies the condition of order if, and only if, the set satisfies the restriction problem with respect to
4.3. A first attempt at derandomization
Based on the arguments of the last section, one could apply the construction from Theorem 2.7 directly to solve the corresponding restriction problem with parameters , whenever . This leads to a point set with and
Note that this bound matches the union bound from Section 4.1. However, the running time of the algorithm, as given by Theorem 2.7, is
where is the time complexity of the membership oracle, which can be assumed to be of the order in this case.
4.4. Derandomization using splitters
We now describe how to improve the construction of an explicit solution to the desired restriction problem with parameters equal to and the set system defined by (4.3). We use the approach of [26] to obtain solutions of the restriction problem that are ‘small’ both in size and in the running time of the corresponding algorithm. ‘Small’ here means that the dependence on the original problem dimension is as small as possible.
At the heart of the construction are splitters, see Section 2.3. As already indicated in Section 2.3, we use a splitter, say , to map the original dimensional problem to a restriction problem in dimension , which can then be solved with cost independent of .
Recall that, by Definition 2.8, is an splitter, if it is a collection of mappings such that for every with , there is an , which is injective on , i.e., has elements.
Further, let be the solution of the restriction problem with parameters with respect to the original system of restrictions , see (4.3). This means that such that for every with and any there is with .
Now we are in the position to define the solution to the restriction problem with parameters and the system . Indeed, we define
to be the set of concatenations of any splitter with any element of the solution to the restriction problem. Here, we switch between the notions of vectors of length (resp. ) and mappings from (resp. ) to . This should not lead to any confusion.
To show that is indeed a solution to our restriction problem, let with . Then, there exists , such that has mutually different elements, i.e., . Now, for every , there is some , such that . Hence, , which satisfies , is a solution to the restriction problem with parameters .
We merge all the components together in the form of an algorithm.
Algorithm 2
(1) For and , choose a positive integer with and set ;
(2) Generate a splitter as in Lemma 2.9;
(3) Generate a solution to the restriction problem with parameters and restrictions from (4.3) as in Theorem 2.7;
(4) Set ;
(5) Increase all the coordinates by one, then divide them by ;
(6) Output the resulting point set .
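The concatenation step of Algorithm 2 can be sketched as follows; the representation is our assumption (splitter maps and low-dimensional solution vectors as tuples): every splitter map f is composed with every solution vector y, producing the point (y[f(1)], ..., y[f(d)]) of length d, exactly as in the definition of the set of concatenations above.

```python
def concatenate(splitter, solutions, d):
    """Sketch of step (4) of Algorithm 2.  Each splitter map f (a tuple
    of length d with values in range(b)) is composed with each solution
    vector y (a tuple of length b), yielding a vector of length d."""
    points = []
    for f in splitter:
        for y in solutions:
            points.append(tuple(y[f[i]] for i in range(d)))
    return points
```

The size of the resulting set is the product of the sizes of the splitter and of the low-dimensional solution, which is where the polynomial dependence of the point count in Theorem 4.4 comes from.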
Theorem 4.4.
Let and . Then, there is an absolute constant , such that Algorithm 2 constructs a set with and