
Covering of high-dimensional cubes and quantization

As the main problem, we consider covering of a d-dimensional cube by n balls with reasonably large d (10 or more) and reasonably small n, like n=100 or n=1000. We do not require full coverage but only 90% or 95% coverage. We establish that efficient covering schemes have several important properties which are not seen in small dimensions or in asymptotic considerations for very large n. One of these properties can be termed 'do not try to cover the vertices', as the vertices of the cube and their close neighbourhoods are very hard to cover and for large d there are far too many of them. We clearly demonstrate that, contrary to a common belief, placing balls at points which form a low-discrepancy sequence in the cube makes for a very inefficient covering scheme. For a family of random coverings, we are able to provide very accurate approximations to the coverage probability. We then extend our results to the problems of coverage of a cube by smaller cubes and quantization, the latter being also referred to as facility location. Along with theoretical considerations and derivation of approximations, we discuss results of a large-scale numerical investigation.


1 Introduction

In this paper, we develop and study efficient schemes for covering and quantization in high-dimensional cubes. In particular, we will demonstrate that the proposed schemes are much superior to the so-called 'low-discrepancy sequences'. The paper starts by introducing the main notation; then we formulate the main problem of covering a d-dimensional cube by Euclidean balls. This is followed by a discussion of the main principles we have adopted for the construction of our algorithms. Then we briefly formulate the problem of covering a cube by smaller cubes (which are balls in the L∞-norm) and the problem of quantization. Both problems have many similarities with the main problem of covering a cube by balls. At the end of this section, we describe the structure of the remaining sections of the paper and summarize our main findings.

1.1 Main notation

  • R^d: the d-dimensional Euclidean space;

  • ‖ · ‖ and ‖ · ‖_∞: the Euclidean and L∞ norms in R^d;

  • B_d(Z, r) = {Y ∈ R^d : ‖Y − Z‖ ≤ r}: the d-dimensional ball of radius r centered at Z;

  • S_d(Z, r) = {Y ∈ R^d : ‖Y − Z‖ = r}: the sphere of radius r centered at Z;

  • the d-dimensional cube of side length 2δ centered at Z: the d-dimensional ball in the L∞-norm with radius δ and center Z.

1.2 Main problem of interest

The main problem discussed in the paper is the following problem of covering a cube by balls. We consider a d-dimensional cube, n points Z_1, ..., Z_n in it, and the corresponding balls B_d(Z_i, r) of radius r centered at the points Z_i. The dimension d, the number of balls n and their radius r could be arbitrary.

We are interested in the problem of choosing the locations of the centers of the balls so that the union of the balls covers the largest possible proportion of the cube. That is, we are interested in choosing a design Z_n = {Z_1, ..., Z_n} (a collection of n points) so that the quantity

vol (1)

is as large as possible (given n, r and the freedom we are able to use in choosing Z_n). Here the union of the n balls is denoted as in

(2)

and the quantity (1) is the proportion of the cube covered by the balls B_d(Z_1, r), ..., B_d(Z_n, r).

For a scheme Z_n, its covering radius CR(Z_n) is defined as the largest distance from a point of the cube to the nearest point of Z_n. In computer experiments, the covering radius is called the minimax-distance criterion, see johnson1990minimax and pronzato2012design; in the theory of low-discrepancy sequences, the covering radius is called dispersion, see (niederreiter1992random, Ch. 6). The problem of optimal covering of a cube by balls is of very high importance for the theory of global optimization and many branches of numerical mathematics. In particular, the celebrated results of A. G. Sukharev imply that an n-point design with smallest CR provides the following: (a) the min-max n-point global optimization method in the set of all adaptive n-point optimization strategies, see (sukharev2012minimax, Ch. 4, Th. 2.1), and (b) the n-point min-max optimal quadrature, see (sukharev2012minimax, Ch. 3, Th. 1.1). In both cases, the class of (objective) functions is the class of Lipschitz functions with known Lipschitz constant.

If d is not small, then computation of the covering radius CR(Z_n) for any non-trivial design Z_n is a very difficult computational problem. This explains why the problem of constructing optimal n-point designs with smallest covering radius is notoriously difficult; see, for example, the recent surveys toth20172; toth1993packing.
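Although exact computation of the covering radius is infeasible in high dimensions, a crude Monte Carlo lower bound is easy to obtain: sample many random points of the cube and take the largest distance to the nearest design point. The Python sketch below is purely illustrative; the normalization of the cube to [-1, 1]^d and all parameter values are assumptions made for the example, and the estimate is biased downwards because a finite sample rarely hits the worst-covered neighbourhoods of the vertices.

import numpy as np
from scipy.spatial import cKDTree

def approx_covering_radius(Z, n_test=200_000, seed=0):
    # Monte Carlo lower bound on the covering radius of the design Z
    # (an (n, d) array of ball centres) with respect to the cube [-1, 1]^d
    # (an assumed normalization): the largest sampled distance to the
    # nearest design point.
    d = Z.shape[1]
    rng = np.random.default_rng(seed)
    U = rng.uniform(-1.0, 1.0, size=(n_test, d))   # random test points in the cube
    dists, _ = cKDTree(Z).query(U)                  # distances to nearest centres
    return float(dists.max())

rng = np.random.default_rng(1)
Z = rng.uniform(-1.0, 1.0, size=(100, 10))          # n = 100 random centres, d = 10
print(approx_covering_radius(Z))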

If r ≥ CR(Z_n), then the quantity defined in (1) is equal to 1 and the whole cube gets covered by the balls. However, we are only interested in reaching values like 0.9, when a large part of the cube is covered. There are two main reasons why we are not interested in reaching the value 1: (a) the practical impossibility of numerically verifying full coverage when d is large enough, and (b) the fact that our approximations lose accuracy when the coverage closely approaches 1.

If, for a given r, the proportion (1) is at least γ, then the corresponding coverage of the cube will be called γ-coverage; the corresponding value of r can be called the γ-covering radius. If γ = 1 then the γ-coverage becomes the full coverage and the 1-covering radius of Z_n becomes CR(Z_n). Of course, for any γ we can reach γ-coverage by means of increasing r. Likewise, for any given r we can reach γ-coverage by letting n grow. However, we are not interested in very large values of n and try to cover most of the cube with a radius r as small as possible. We will keep in mind the typical values indicated in the abstract: reasonably large d (10 or more) and reasonably small n, like n = 100 or n = 1000. Correspondingly, we will illustrate our results in such scenarios.
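For a fixed design, the γ-covering radius can be estimated numerically by combining a Monte Carlo estimate of the covered proportion with bisection over r (the coverage is nondecreasing in r). The following sketch again assumes the cube is normalized to [-1, 1]^d; the function names and tolerances are illustrative, not taken from the paper.

import numpy as np
from scipy.spatial import cKDTree

def coverage(Z, r, n_test=200_000, seed=0):
    # Monte Carlo estimate of the proportion of the cube [-1, 1]^d (assumed
    # normalization) covered by balls of radius r centred at the rows of Z.
    rng = np.random.default_rng(seed)
    U = rng.uniform(-1.0, 1.0, size=(n_test, Z.shape[1]))
    dists, _ = cKDTree(Z).query(U)
    return float(np.mean(dists <= r))

def gamma_covering_radius(Z, gamma=0.9, tol=1e-3):
    # Bisection on r; with a fixed test sample (fixed seed) the estimated
    # coverage is nondecreasing in r, so bisection is legitimate.
    lo, hi = 0.0, 2.0 * np.sqrt(Z.shape[1])   # diameter of [-1, 1]^d is a safe upper bound
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if coverage(Z, mid) >= gamma:
            hi = mid
        else:
            lo = mid
    return hi

rng = np.random.default_rng(2)
Z = rng.uniform(-1.0, 1.0, size=(100, 10))
print(gamma_covering_radius(Z, gamma=0.9))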

1.3 Two contradictory criteria and a compromise

In choosing the design Z_n, the following two main criteria must be followed:

  • (i) the volumes of intersections of the cube with each individual ball are not very small;

  • (ii) the volumes of pairwise intersections of the balls are small for all pairs of balls.

These two criteria do not agree with each other. Indeed, as shown in Section 2 (see formulas (12)–(15)), the volume of intersection of a ball with the cube decreases as the norm of the ball's center grows, and hence criterion (i) favours points Z_i with small norms. However, if at least some of the points Z_i get close to 0, then the distances between these points get small and, in view of the formulas of Section 6.7, the volumes of pairwise intersections of the corresponding balls get large.

This yields that the above two criteria require a compromise in the rule for choosing Z_n: the points Z_i should not be too far from 0 but, at the same time, not too close to it. In particular, and this is clearly demonstrated in many examples throughout the paper, the so-called 'uniformly distributed sequences of points' in the cube, including 'low-discrepancy sequences', provide poor covering schemes. This is in sharp contrast with the asymptotic case of very large n (and hence very small r), when one of the recommendations, see (janson1986random, p. 84), is to choose the Z_i's from a uniformly distributed sequence of points in a set which is slightly larger than the cube; this is to facilitate covering of the boundary of the cube, as it is much easier to cover the interior of the cube than its boundary.

In our considerations, n is not very large and hence the radius of the balls cannot be small. One of our recommendations for choosing Z_n is to choose the Z_i's at random in a δ-cube (the original cube shrunk towards its center by a factor δ, with 0 < δ ≤ 1) with components distributed according to a suitable Beta-distribution. The optimal value of δ is always smaller than 1 and depends on d and n. If d is small or n is astronomically large, then the optimal value of δ could be close to 1, but in most interesting instances this value is significantly smaller than 1. This implies that the choice δ = 1 (for example, if the Z_i's form a uniformly distributed sequence of points in the whole cube) often leads to very poor covering schemes, especially when the dimension d is large (see Tables 1–3 discussed in Section 3). More generally, we show that for the construction of efficient designs Z_n, either deterministic or randomized, we have to restrict the norms of the design points Z_i. We will call this principle the 'δ-effect'.
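As an illustration of this recommendation, the sketch below draws design points whose components are independent Beta(α, α) variables rescaled to the δ-cube; α = 1 corresponds to the uniform distribution on the δ-cube. The normalization of the cube to [-1, 1]^d, the symmetric Beta parameterization and the parameter values are assumptions made for the example and are not taken from the paper.

import numpy as np

def beta_design(n, d, delta=0.8, alpha=2.0, seed=0):
    # n design points in the delta-cube [-delta, delta]^d (assumed normalization)
    # with i.i.d. components following a symmetric Beta(alpha, alpha) law
    # rescaled from [0, 1] to [-delta, delta]; alpha = 1 gives the uniform
    # distribution on the delta-cube.
    rng = np.random.default_rng(seed)
    B = rng.beta(alpha, alpha, size=(n, d))    # components in [0, 1]
    return delta * (2.0 * B - 1.0)             # rescale to [-delta, delta]

Z_uniform = beta_design(100, 10, delta=1.0, alpha=1.0)   # uniform in the whole cube
Z_shrunk  = beta_design(100, 10, delta=0.8, alpha=2.0)   # norms restricted
print(np.abs(Z_uniform).max(), np.abs(Z_shrunk).max())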

1.4 Covering a cube by smaller cubes and quantization

In Section 4 we consider the problem of γ-coverage of the cube by smaller cubes (which are L∞-balls). The problem of 1-coverage (full coverage) of a cube by cubes has attracted reasonable attention in the mathematical literature, see e.g. kuperberg1994line; januszewski1994line. The problem of γ-coverage of a cube by cubes turns out to be simpler than the main problem of γ-coverage of a cube by Euclidean balls, and we have managed to derive closed-form expressions for (a) the volume of intersection of two cubes, and (b) the coverage, that is, the probability of covering a random point in the cube by n smaller cubes, for a wide choice of randomized schemes of choosing the designs Z_n. The results of Section 4 show that the δ-effect holds for the problem of covering the cube by smaller cubes to the same degree as for the main problem of Section 3 of covering with balls.

Section 5 is devoted to the problem of quantization, also known as the problem of facility location. Let U be a random vector uniformly distributed on the cube and Z_n be an n-point design. The mean square quantization error is the expected squared distance from U to the nearest point of Z_n. In the case where the points Z_i are i.i.d. uniform random vectors, we will derive a simple approximation for the expected value of this error and clearly demonstrate the δ-effect. Moreover, we will notice a strong similarity between efficient quantization designs and the efficient covering designs constructed in Section 3.
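A minimal Monte Carlo sketch of the mean square quantization error, under the assumed normalization of the cube to [-1, 1]^d (all parameter values are illustrative):

import numpy as np
from scipy.spatial import cKDTree

def mean_square_quantization_error(Z, n_test=200_000, seed=0):
    # Monte Carlo estimate of E ||U - nearest point of Z||^2 for U uniform
    # on the cube [-1, 1]^d (assumed normalization).
    rng = np.random.default_rng(seed)
    U = rng.uniform(-1.0, 1.0, size=(n_test, Z.shape[1]))
    dists, _ = cKDTree(Z).query(U)
    return float(np.mean(dists ** 2))

rng = np.random.default_rng(3)
Z = rng.uniform(-0.7, 0.7, size=(1000, 20))   # centres restricted to a delta-cube, delta = 0.7
print(mean_square_quantization_error(Z))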

1.5 Structure of the paper and main results

In Section 2 we derive accurate approximations for the volume of intersection of an arbitrary d-dimensional cube with an arbitrary d-dimensional ball. These formulas are heavily used in Section 3, which is the main section of the paper and deals with the problem of γ-coverage of a cube by balls. In Section 4 we extend some of the considerations of Section 3 to the problem of γ-coverage of the cube by smaller cubes. In Section 5 we argue that there is a strong similarity between efficient quantization designs and the efficient designs of Section 3. In Appendix A (Section 6), we briefly mention several facts, used in the main part of the paper, related to high-dimensional cubes and balls. In Appendix B (Section 7), we prove two simple but very important lemmas about the distribution and moments of certain random variables.


Our main contributions in this paper are:

  • an accurate approximation (19) for the volume of intersection of an arbitrary d-dimensional cube with an arbitrary d-dimensional ball;

  • an accurate approximation (27) for the expected volume of intersection of the cube with n balls having uniform random centers;

  • a closed-form expression (Section 4.2) for the expected volume of intersection of the cube with n cubes having uniform random centers;

  • construction of efficient schemes of quantization and γ-coverage of the cube by balls;

  • a large-scale numerical study.

We are preparing an accompanying paper second_paper in which we will further explore the topics of Sections 3–5 and also consider the problems of quantization and γ-coverage in the whole space and the problem of γ-coverage of simplices.

2 Volume of intersection of a cube and a ball

2.1 The main quantity of interest

Consider the following problem. Let us take the cube and a ball of radius r centered at a point Z; this point could lie outside the cube. Denote the fraction of the cube covered by the ball by

(3)

Our aim is to approximate the quantity (3) for arbitrary d, r and Z. We will derive a CLT-based normal approximation in Section 2.3 and then, using an asymptotic expansion in the CLT for sums of non-identically distributed random variables, we will improve this normal approximation in Section 2.4. In Section 6.8 we consider a more direct approach for approximating (3), based on the use of characteristic functions and the fact that (3), considered as a function of r, is the c.d.f. of the distance from Z to a random vector U with uniform distribution on the cube. From this, the quantity (3) can be expressed through a convolution of one-dimensional c.d.f.'s. Using this approach we can evaluate the quantity (3) with high accuracy, but the calculations are rather time-consuming. Moreover, entirely new computations have to be made for different Z and, therefore, we much prefer the approximation of Section 2.4.
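For reference, a plain Monte Carlo estimate of the fraction (3) is straightforward; it is the benchmark against which the approximations of Sections 2.3 and 2.4 can be compared. The normalization of the cube to [-1, 1]^d in the sketch below is an assumption made for the illustration.

import numpy as np

def ball_cube_fraction(Z, r, n_test=200_000, seed=0):
    # Monte Carlo estimate of the fraction of the cube [-1, 1]^d (assumed
    # normalization) lying inside the ball of radius r centred at Z.
    Z = np.asarray(Z, dtype=float)
    rng = np.random.default_rng(seed)
    U = rng.uniform(-1.0, 1.0, size=(n_test, Z.size))
    return float(np.mean(np.sum((U - Z) ** 2, axis=1) <= r ** 2))

d = 20
print(ball_cube_fraction(np.zeros(d), r=2.0))       # ball centred at the cube centre
print(ball_cube_fraction(np.full(d, 0.5), r=2.0))   # centre on the diagonal of the cube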

Note that in one special case several approximations for the quantity (3) have been derived in SIAM, but the methods used there cannot be generalized to arbitrary Z. Note also that symmetry considerations imply a simple relation between the values of (3) for certain special locations of Z, in particular when Z is a vertex of the cube.

2.2 A generalization of the quantity (3)

In the next sections, we will need another quantity which slightly generalizes (3). Assume that we have a cube of arbitrary volume and a ball of radius r centered at a point Z. Denote the fraction of this cube covered by the ball by

(4)

Then the following change of the coordinates and the radius

(5)

gives

(6)

2.3 Normal approximation for the quantity (3)

Let U = (u_1, ..., u_d) be a random vector with uniform distribution on the cube, so that u_1, ..., u_d are i.i.d. random variables uniformly distributed on the corresponding interval. Then, for given Z and any r,

(7)

That is, the quantity (3), regarded as a function of r, is the c.d.f. of the distance from U to Z.

Let u have a uniform distribution on the corresponding interval and let z denote a component of Z. In view of Lemma 1 of Section 7, the density of the r.v. (u − z)² is

(8)

and its moments are

(9)

where the last expression is the third central moment of (u − z)².

For z = 0, the density of (u − z)² takes the simpler form

(10)

with the expressions (9) for the moments not changing.

Consider the r.v. equal to the squared distance from U to Z,

(11)

From (9), its mean is

(12)

Using independence of the components u_1, ..., u_d, we also obtain from (9) its variance

(13)

and its third central moment

(14)

If d is large enough, then the conditions of the CLT for the sum (11) are approximately met and the distribution of (11) is approximately normal with mean (12) and variance (13). That is, we can approximate the quantity (3) by

(15)

where Φ(·) is the c.d.f. of the standard normal distribution.
The approximation (15) has acceptable accuracy if the value of (3) is not very small; for example, if it falls inside a central confidence interval generated by the standard normal distribution; see the figures of Section 2.5 as examples. Let q denote a quantile of the standard normal distribution. As follows from (12), (13) and the approximation (15), we expect this level of accuracy if

(16)

In many cases discussed in Section 3, the radius r does not satisfy the inequality (16) even for moderate quantile levels, and hence the normal approximation (15) is not satisfactorily accurate; this can be evidenced from Figures 2–16 below.

In the next section, we improve the approximation (15) by using an Edgeworth-type expansion in the CLT for sums of independent non-identically distributed r.v.

2.4 Improved normal approximation

A general expansion in the central limit theorem for sums of independent non-identically distributed r.v. has been derived by V. Petrov, see Theorem 7 in Chapter 6 of petrov2012sums; see also Proposition 1.5.7 in rao1987asymptotic. The first three terms of this expansion have been specialized by V. Petrov in Section 5.6 of petrov. By using only the first term in this expansion, we obtain an improved approximation for the distribution function of the r.v. (11), leading to the following improved form of (15):

(17)

where

(18)

From the viewpoint of Section 3, the most important values in (18) are those for which the quantity (3) is small or moderate. For such values, the uncorrected normal approximation (15) significantly overestimates the values of (3), see Figures 2–16 below. The approximation (17) brings the normal approximation down and makes it much more accurate. The other terms in Petrov's expansion of petrov2012sums and petrov continue to bring the approximation down (in a much slower fashion), so that the approximation (17) still slightly overestimates the true value of (3), at least in the range of interesting values from (18). However, if d is large enough, then the approximation (17) is very accurate and no further correction is needed.

A very attractive feature of the approximations (15) and (18) is that they depend on Z through its norm only. We could have specialized for our case the next terms in Petrov's expansion, but these terms no longer depend on Z through its norm only (this fact can be verified from formula (54) for the fourth moment) and hence the next terms are much more complicated. Moreover, adding one or two extra terms from Petrov's expansion to the approximation (17) does not fix the problem entirely for all Z and r. Instead, we propose a slight adjustment to the r.h.s. of (17) to improve this approximation, especially for small dimensions. Specifically, we suggest the approximation

(19)

where the adjustment takes one value if the point Z lies on the diagonal of the cube and a slightly different value for a typical (random) point Z. For typical (random) points Z, the values of (3) are marginally smaller than for points on the diagonal of the cube having the same norm, but the difference is very small. In addition to the points on the diagonal, there are other special points: the points whose components are all zero except for one. For such points, the values of (3) are smaller than for typical points with the same norm, especially for small d. Such points, however, are of no value for us as they are not typical, and we have never observed in simulations random points that come close to these truly exceptional points.
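The construction of the CLT approximation and of its first correction can be illustrated with a short computation. The sketch below assumes the cube is [-1, 1]^d, so that for u uniform on [-1, 1] the r.v. (u − z)² has mean 1/3 + z², variance 4/45 + 4z²/3 and third central moment 16/945 + 16z²/15; summing over coordinates and applying the CLT (and, for the corrected version, the first Edgeworth term) gives the two approximations. The constants therefore reflect this assumed normalization rather than the paper's exact formulas (15)-(19).

import numpy as np
from math import erf, exp, pi, sqrt

def approx_ball_cube_fraction(Z, r, corrected=True):
    # CLT-based approximation to the fraction of the cube [-1, 1]^d covered by
    # the ball of radius r centred at Z, with an optional first Edgeworth
    # correction.  The moment constants below are derived for U uniform on
    # [-1, 1]^d, an assumed normalization.
    Z = np.asarray(Z, dtype=float)
    d, s2 = Z.size, float(np.sum(Z ** 2))
    mu  = d / 3.0 + s2                           # mean of ||U - Z||^2
    var = 4.0 * d / 45.0 + 4.0 * s2 / 3.0        # variance of ||U - Z||^2
    k3  = 16.0 * d / 945.0 + 16.0 * s2 / 15.0    # sum of third central moments
    sig = sqrt(var)
    t = (r ** 2 - mu) / sig
    Phi = 0.5 * (1.0 + erf(t / sqrt(2.0)))       # standard normal c.d.f.
    if not corrected:
        return Phi                                # plain CLT approximation
    phi = exp(-t ** 2 / 2.0) / sqrt(2.0 * pi)    # standard normal density
    return Phi + k3 / (6.0 * sig ** 3) * (1.0 - t ** 2) * phi   # corrected version

d = 20
print(approx_ball_cube_fraction(np.zeros(d), 2.0, corrected=False))
print(approx_ball_cube_fraction(np.zeros(d), 2.0, corrected=True))

Comparing these values with the Monte Carlo estimator of Section 2.1 shows the typical behaviour discussed above: the correction pulls the plain normal approximation down towards the true value.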

2.5 Simulation study

In Figures 2–16 we demonstrate the accuracy of approximations (15), (17) and (19) for several dimensions d and the following locations of Z:

  • Z = 0, the center of the cube;

  • Z = V, with V being a vertex of the cube;

  • Z lying on a diagonal of the cube, with all components equal;

  • Z a random vector uniformly distributed on a sphere of some radius ρ centered at the center of the cube.

There are figures of two types. In the figures of the first type, we plot the quantity (3) over a wide range of r, ensuring that its values cover the whole range from 0 to 1. In the figures of the second type, we plot it over a much smaller range of r, with the values of (3) lying between 0 and some small positive number. For the purpose of using the approximations of Section 3, we need to assess the accuracy of all approximations for smaller values of (3), and hence the second type of plots is often more insightful. In Figures 2–14, the solid black line depicts values of (3) computed via Monte Carlo methods; the blue dashed, the red dot-dashed and the green long-dashed lines display approximations (15), (17) and (19), respectively.

In the case where Z is a random vector uniformly distributed on a sphere, the style of the figures of the second type is slightly changed to adapt to this choice of Z and to provide more information on the points Z which do or do not belong to the cube. In the corresponding figures, the thick dashed red lines correspond to random points Z belonging to the cube, and the thick dot-dashed orange lines correspond to random points Z lying outside the cube. Approximations (15) and (17) are depicted in the same manner as in the previous figures, but the approximation (19) is now represented by a solid green line. The thick solid red line displays values of (3) for Z on the diagonal of the cube with the same norm.
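For completeness, the four families of locations of Z used in this simulation study can be generated as follows (cube assumed to be [-1, 1]^d; a uniform point on a sphere of radius ρ is obtained by normalizing a standard Gaussian vector; the value of ρ is illustrative):

import numpy as np

def test_locations(d, rho=1.5, seed=0):
    # The four families of centres used in the simulation study, under the
    # assumed normalization of the cube to [-1, 1]^d.
    rng = np.random.default_rng(seed)
    centre   = np.zeros(d)                        # centre of the cube
    vertex   = np.ones(d)                         # a vertex of the cube
    diagonal = np.full(d, rho / np.sqrt(d))       # on the diagonal, norm rho
    g = rng.standard_normal(d)
    sphere   = rho * g / np.linalg.norm(g)        # uniform on the sphere of radius rho
    return centre, vertex, diagonal, sphere

for Z in test_locations(d=20):
    print(round(float(np.linalg.norm(Z)), 3))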

Figures 1–4: Z at the center of the cube, for various d and ranges of r.
Figures 5–8: Z at a vertex of the cube.
Figures 9–12: Z at the half-diagonal of the cube.
Figures 13–16: Z uniformly distributed on a sphere.

From the simulations that led to Figures 2–16 we can draw the following conclusions.

  • The normal approximation (15) is quite satisfactory unless the value of (3) is small.

  • The accuracy of all approximations improves as d grows.

  • The approximation (19) is very accurate even if the values of (3) are very small.

  • If d is large enough, then the approximations (17) and (19) are practically identical and are extremely accurate.

3 Covering a cube by balls

In this section, we consider the main problem of covering the cube by the union of n balls, as formulated in Section 1.2. We will discuss different schemes of choosing the set of ball centers Z_n for given n and d. The radius r will then be chosen to achieve the required probability of covering (0.9 in our examples). Most of the schemes will involve one or several parameters which we will want to choose in an optimal way.

3.1 The main covering scheme

The following will be our main scheme for choosing Z_n.

Scheme 1. Z_1, ..., Z_n are i.i.d. random vectors uniformly distributed in the δ-cube (the original cube shrunk towards its center by a factor δ), where 0 < δ ≤ 1 is a parameter.
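A minimal simulation sketch of Scheme 1 (assuming the cube is normalized to [-1, 1]^d, so that the δ-cube is [-δ, δ]^d; all parameter values are illustrative): draw the n centers uniformly in the δ-cube, estimate the covered proportion of the cube by Monte Carlo, and average over replications.

import numpy as np
from scipy.spatial import cKDTree

def scheme1_coverage(n, d, r, delta, n_repl=10, n_test=100_000, seed=0):
    # Average proportion of the cube [-1, 1]^d (assumed normalization) covered
    # by n balls of radius r whose centres are i.i.d. uniform in the delta-cube
    # [-delta, delta]^d, as in Scheme 1; averaged over n_repl replications.
    rng = np.random.default_rng(seed)
    cov = []
    for _ in range(n_repl):
        Z = rng.uniform(-delta, delta, size=(n, d))
        U = rng.uniform(-1.0, 1.0, size=(n_test, d))
        dists, _ = cKDTree(Z).query(U)
        cov.append(np.mean(dists <= r))
    return float(np.mean(cov))

# coverage as a function of delta for fixed n, d and r (illustrative values)
for delta in (0.5, 0.7, 0.9, 1.0):
    print(delta, scheme1_coverage(n=512, d=10, r=1.3, delta=delta, n_repl=5))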

We will formulate several other covering schemes and compare them with Scheme 1. The reasons why we have chosen Scheme 1 as the main scheme are the following.

  • It is easier to investigate theoretically than all other non-trivial schemes.

  • It includes, as a special case when δ = 1, the scheme of uniform random points in the whole cube, which is very popular in the practice of Monte Carlo methods niederreiter1992random and global random search zhigljavsky2012theory; zhigljavsky2007stochastic, and is believed to be rather efficient (this is not true).

  • Numerical studies provided below show that Scheme 1 with optimal δ provides coverings which are rather efficient, especially for large d; see Section 3.5 for a discussion regarding this issue.

3.2 Theoretical investigation of Scheme 1

Let Z_1, ..., Z_n be i.i.d. random vectors uniformly distributed in the δ-cube with 0 < δ ≤ 1. Then, for a given point U in the cube,

(20)

where the union of the balls is as defined in (2). The main characteristic of interest, the quantity defined in (1) (the proportion of the cube covered by the union of the balls), is then simply

(21)

Continuing from (20), note that

(22)

where the probability on the right-hand side is the quantity defined by formula (4). From (5) and (6), this quantity can be expressed through the quantity defined by (3), which can be approximated in a number of different ways, as shown in Section 2. We will compare (15), the simplest of the approximations, with the approximation given in (19). Approximation (15) gives

(23)

whereas approximation (19) provides

(24)

From (45) we obtain the corresponding mean and variance; moreover, if d is large enough then the underlying random variable is approximately normal.

We shall simplify the expression (20) by using the approximation

(25)

which is a good approximation when the probability involved is small and n is moderate; this agrees with the ranges of d, n and r we are interested in.

We can combine the expressions (21) and (20) with the approximations (23), (24) and (25), as well as with the normal approximation for the distribution of the underlying random variable, to arrive at two final approximations for the expected coverage that differ in complexity. If the original normal approximation (23) is used, then we obtain

(26)

If approximation (24) is used, we obtain

(27)

3.3 Simulation study for assessing accuracy of approximations (26) and (27)

In Figures 18–22, the true coverage is represented by a solid black line and has been obtained via Monte Carlo methods. Approximation (26) is indicated by a dashed blue line and approximation (27) is represented by a long-dashed green line. All figures demonstrate that approximation (27) is extremely accurate across different dimensions and values of n. This approximation is much superior to approximation (26).

Figures 17–22: the coverage and its approximations (26) and (27), for various dimensions and values of n.

3.4 Other schemes

In addition to Scheme 1, we have also considered the following schemes for choosing Z_n.

Scheme 2. Z_1 = 0; Z_2, ..., Z_n are i.i.d. random vectors uniformly distributed in the δ-cube.

Scheme 3. Z_1, ..., Z_n are taken from the minimum-aberration fractional factorial design on the vertices of the δ-cube.

Scheme 4. Z_1, ..., Z_n are i.i.d. random vectors in the δ-cube with independent components distributed according to a Beta-distribution with density (42), for some parameter α.

Scheme 5. Z_1, ..., Z_n are i.i.d. random vectors uniformly distributed in a ball of some radius ρ centered at the center of the cube.

Scheme 6. Z_1, ..., Z_n are i.i.d. random vectors uniformly distributed on a sphere of some radius ρ centered at the center of the cube.

Scheme 7. Z_1, ..., Z_n are taken from a low-discrepancy Sobol' sequence on the δ-cube.

The rationale behind the choice of these schemes is as follows. By studying Scheme 2, we test the importance of including the point 0 in Z_n. We conjectured that if we included 0 in Z_n, the optimal value of δ might increase for some of the schemes, making them more efficient; this effect has not been detected.

Scheme 3 with optimal δ is an obvious candidate for being the most efficient. Unlike all other schemes considered, Scheme 3 is only defined for values of n that are powers of two.

By using Scheme 4, we test the possibility of improving Scheme 1 by changing the distribution of the points in the δ-cube. We have found that the effect of the distribution is very strong and that smaller values of the Beta parameter α lead to more efficient covering schemes. By choosing α small enough, we can achieve an average efficiency of the covering schemes very close to the efficiency of Scheme 3. Tables 1–3 contain results obtained for Scheme 4 with two values of α; if the Beta-distribution is uniform, then Scheme 4 becomes Scheme 1.

From Section 6.4, we know that for constructing efficient designs we have to somehow restrict the norms of the Z_i's. In Schemes 5 and 6, we are trying to do this in a way alternative to Schemes 1 and 4.

Scheme 7 is a natural improvement of Scheme 1. As a particular case with δ = 1, it contains one of the best known low-discrepancy sequences, and hence Scheme 7 with δ = 1 serves as the main benchmark with which we compare the other schemes. For the construction, we have used the R implementation of Sobol' sequences, which is based on joe2008constructing.
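In Python, a comparable generator of Sobol' points is available in scipy (scipy.stats.qmc.Sobol); the sketch below uses it purely as an illustration of Scheme 7, with the points rescaled from [0, 1]^d to the δ-cube [-δ, δ]^d (the normalization of the cube to [-1, 1]^d and the parameter values are assumptions made for the example).

import numpy as np
from scipy.stats import qmc

def sobol_design(n, d, delta=1.0, scramble=False, seed=0):
    # First n points of a Sobol' sequence in [0, 1]^d, rescaled to the
    # delta-cube [-delta, delta]^d (the rescaling convention is an assumption).
    # Taking n to be a power of two preserves the balance properties of the
    # sequence; scipy may warn otherwise.
    sampler = qmc.Sobol(d=d, scramble=scramble, seed=seed)
    P = sampler.random(n)              # points in [0, 1]^d
    return delta * (2.0 * P - 1.0)

Z = sobol_design(n=512, d=10, delta=0.85)
print(Z.shape, float(np.abs(Z).max()))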

For all the schemes excluding Scheme 3, the sequences of designs are nested, so that each design is contained in the next one for all n; using the terminology of kuperberg1994line, these schemes provide on-line coverings of the cube. Note that for the chosen values of n, Scheme 7 also has some advantage over the other schemes considered. Indeed, although Sobol' sequences are nested, values of n that are powers of two are special for Sobol' sequences, and for such values of n the Sobol' sequences possess extra uniformity properties that they do not possess for other values of n.

3.5 Numerical comparison of schemes

In Tables 1–3, for Schemes 1, 2, 4, 5 and 6 we present the smallest value of r required to achieve a 0.9-coverage on average. For these schemes, the value inside the brackets shows the value of the scheme parameter (δ or the radius ρ) with which this coverage is obtained. For Schemes 3 and 7, we give the smallest value of r needed for a 0.9-coverage; for these two schemes, the value within the brackets corresponds to the (non-random) value of δ with which we attain such a coverage.

In Figures 24–30 we plot the coverage as a function of δ across a number of schemes, dimensions and values of n. For these plots we have used the values of r provided in Tables 1–3, so that for Figures 24–26, which correspond to Scheme 1 and Scheme 2, the maximum coverage is very close to 0.9 and the optimal δ is very close to the values presented in Tables 1–3. For Figures 28–30 the maximum coverage 0.9 is attained with the values of r provided in Tables 1–3. In Figures 24–30 the solid green line, long-dashed red line, dashed blue line and dot-dashed orange line correspond to the four cases considered, respectively. The vertical lines on these plots indicate the value of δ where the maximum coverage is obtained.

Scheme 1 1.632 (0.70) 1.520 (0.78) 1.291 (0.86) 1.195 (0.90)
Scheme 1, δ = 1 1.720 (1.00) 1.577 (1.00) 1.319 (1.00) 1.215 (1.00)
Scheme 2 1.634 (0.70) 1.520 (0.78) 1.291 (0.86) 1.195 (0.90)
Scheme 3 1.530 (0.44) 1.395 (0.48) 1.115 (0.50) 1.075 (0.50)
Scheme 4, 1.629 (0.58) 1.505 (0.65) 1.270 (0.72) 1.165 (0.75)
Scheme 4, 1.635 (0.80) 1.525 (0.88) 1.310 (1.00) 1.210 (1.00)
Scheme 5 1.645 (1.40) 1.530 (1.50) 1.330 (1.75) 1.250 (1.75)
Scheme 6 1.642 (1.25) 1.532 (1.35) 1.330 (1.50) 1.250 (1.70)
Scheme 7 1.595 (0.72) 1.485 (0.80) 1.280 (0.85) 1.170 (0.88)
Scheme 7, δ = 1 1.678 (1.00) 1.534 (1.00) 1.305 (1.00) 1.187 (1.00)
Table 1: Values of r and of the scheme parameter (in brackets) needed to achieve 0.9-coverage.
Scheme 1 2.545 (0.50) 2.460 (0.55) 2.290 (0.68) 2.205 (0.70)
Scheme 1, δ = 1 2.840 (1.00) 2.702 (1.00) 2.444 (1.00) 2.330 (1.00)
Scheme 2 2.545 (0.50) 2.460 (0.55) 2.290 (0.68) 2.205 (0.70)
Scheme 3 2.490 (0.32) 2.410 (0.35) 2.220 (0.40) 2.125 (0.44)
Scheme 4, 2.540 (0.44) 2.455 (0.48) 2.285 (0.55) 2.220 (0.60)
Scheme 4, 2.545 (0.60) 2.460 (0.65) 2.290 (0.76) 2.215 (0.78)
Scheme 5 2.550 (1.40) 2.467 (1.60) 2.305 (1.75) 2.235 (1.90)
Scheme 6 2.550 (1.40) 2.467 (1.58) 2.305 (1.75) 2.235 (1.90)
Scheme 7 2.520 (0.50) 2.445 (0.60) 2.285 (0.68) 2.196 (0.72)
Scheme 7, δ = 1 2.750 (1.00) 2.656 (1.00) 2.435 (1.00) 2.325 (1.00)
Table 2: Values of r and of the scheme parameter (in brackets) needed to achieve 0.9-coverage.
Scheme 1 4.130 (0.38) 4.020 (0.45) 3.970 (0.46)
Scheme 1, δ = 1 4.855 (1.00) 4.625 (1.00) 4.520 (1.00)
Scheme 2 4.130 (0.38) 4.020 (0.45) 3.970 (0.46)
Scheme 3 4.110 (0.21) 4.000 (0.25) 3.950 (0.28)
Scheme 4 4.130 (0.30) 4.020 (0.36) 3.970 (0.40)
Scheme 4