Constructing Low Star Discrepancy Point Sets with Genetic Algorithms

by Carola Doerr et al.

Geometric discrepancies are standard measures to quantify the irregularity of distributions. They are an important notion in numerical integration. One of the most important discrepancy notions is the so-called star discrepancy. Roughly speaking, a point set of low star discrepancy value allows for a small approximation error in quasi-Monte Carlo integration. It is thus the most studied discrepancy notion. In this work we present a new algorithm to compute point sets of low star discrepancy. The two components of the algorithm (for the optimization and the evaluation, respectively) are based on evolutionary principles. Our algorithm clearly outperforms existing approaches. To the best of our knowledge, it is also the first algorithm which can be adapted easily to optimize inverse star discrepancies.




1 Introduction

For a point set P ⊆ [0,1)^d and a point y ∈ [0,1)^d, the local star discrepancy of P in y measures the absolute difference between the volume of the box [0,y) and the fraction of points of P that lie inside that box, cf. Figure 1. The star discrepancy of P is the largest such value; i.e., d*_∞(P) = sup_{y ∈ [0,1]^d} |λ([0,y)) − A(y,P)/n|, where n = |P|, λ denotes the Lebesgue volume, and A(y,P) is the number of points of P inside [0,y).

Figure 1: The local discrepancy of P in a point y.
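The definition above can be checked directly; the following Python sketch (function names are ours, not from the paper) computes the local star discrepancy of a point set P at a point y:

```python
import numpy as np

def local_discrepancy(y, P):
    """|volume of the box [0, y) minus the fraction of points of P inside it|.

    y: array of shape (d,); P: array of shape (n, d), points in [0, 1)^d.
    """
    vol = np.prod(y)                       # Lebesgue volume of [0, y)
    inside = np.all(P < y, axis=1).sum()   # points strictly below y in every coordinate
    return abs(vol - inside / len(P))
```

For example, with P = {(0.25, 0.25), (0.75, 0.75)} and y = (0.5, 0.5), the box has volume 0.25 but contains half of the points, so the local discrepancy is 0.25.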

Star discrepancies are deeply related to the ubiquitous task of numerical integration, but they have also found several applications in computer graphics, pseudorandom number generation, experimental design, and option pricing, to name but a few domains. The approximation error of Monte Carlo and quasi-Monte Carlo methods can be expressed in terms of the star discrepancy. To be more precise, let f: [0,1]^d → R be a function for which we want to compute the integral ∫_{[0,1]^d} f(x) dx. The Koksma-Hlawka inequality [Hla61, Nie92] states that, for suitable f, the difference between this integral and the average function value of f on a point set P = {x^(1), ..., x^(n)} is bounded by

|∫_{[0,1]^d} f(x) dx − (1/n) ∑_{i=1}^n f(x^(i))| ≤ V(f) · d*_∞(P),   (1)

where V(f) denotes the so-called variation of f in the sense of Hardy and Krause. Since V(f) depends only on f and not on P, the smaller the discrepancy d*_∞(P), the better the approximation we can expect.

While the points x^(i) in Eq. 1 are chosen randomly in Monte Carlo integration, it has been known for a long time that we can achieve better results by evaluating f in low discrepancy point sets. Thus, in quasi-Monte Carlo numerical integration, the set P is chosen deterministically, so that the factor d*_∞(P) in Eq. 1 is as small as possible. It is thus one of the main challenges in numerical analysis to compute explicit point sets of smallest possible star discrepancy value.

1.1 Scope and Previous Related Work

In this work, we present a new algorithm for constructing low discrepancy point sets. In particular, we provide improved upper bounds for the following two questions.

  1. Star discrepancy: For given n and d, what is the smallest star discrepancy that can be achieved by an n-point configuration in [0,1]^d?

  2. Inverse star discrepancy: For given ε and d, what is the smallest possible value of n such that there exists a set P ⊆ [0,1]^d of size n with d*_∞(P) ≤ ε?

Note that the first question asks where to place the n points such that the irregularity measured by the star discrepancy is as small as possible, while the second question is even more involved. It asks us to decide how many points we need and (indirectly) where to place them, such that the resulting irregularity is at most ε.

Our algorithm has originally been developed to address the first question. However, it turns out that, using a bisection approach and multi-objective selection (see Section 3), we can easily adapt it to answer also the second question.

Our algorithm builds on previous work presented in [DRGTL12] on the construction of low L2-discrepancy sequences (see [DRGTL09] for an earlier GECCO version) and in [GWW12] on the evaluation of the star discrepancy of a given point set. It has two components: an optimization component, which computes candidate point sets, and an evaluation component for assessing the quality of the proposed solutions. During the optimization process, the evaluation component is called several times. The two components of the algorithm will be described in Section 3. On the structure of the algorithms we note here only that the optimization part is based on a genetic algorithm, while the evaluation component is based on threshold accepting (TA), a variant of simulated annealing with derandomized selection rules.

Evaluating the star discrepancy of a given point set is known to be NP-hard [GSW09]. In fact, it is even W[1]-hard with respect to the dimension d [GKWW12], implying that, under standard complexity assumptions, there is no algorithm that evaluates the star discrepancy of n points in dimension d in a running time of f(d) · n^{O(1)} for any function f. The best known exact algorithm for evaluating star discrepancies, the DEM-algorithm, has a running time of O(n^{1+d/2}) [DEM96]. For most relevant settings this is too slow to be applicable. This is true in particular for our setting, where many candidate point sets need to be evaluated. In fact, the complexity of star discrepancy evaluation is the main reason why only few algorithmic approaches are known for the explicit construction of low star discrepancy point sets, cf. also the comment in [DRGTL12, page 3].

A new robust algorithm to estimate star discrepancy values has been proposed in [GWW12]. This algorithm has been reported to give very accurate discrepancy estimates, and our experiments confirm these statements. We thus use this algorithm for the intermediate discrepancy evaluations; i.e., for the optimization process of creating good candidate point configurations. Where feasible, we do a final evaluation of the candidate sets using the exact DEM-algorithm described above.

1.2 Structure of the Paper

An introduction to the star discrepancy problem is given in Section 2. We discuss there also the issue of computing star discrepancy values, and we introduce the generalized Halton sequence, for which our algorithm finds good generating vectors. The algorithm itself is presented in Section 3.
Section 4 surveys our empirical results. For the star discrepancy problem we compare our results with those presented in [DGW10, Thi01a, Thi01b]. The point sets computed by our algorithm clearly outperform those reported there. We also show that our algorithm produces surprisingly good bounds for the inverse star discrepancy problem, thus easily answering one of the subproblems in Open Question 42 from [NW10]. Our bounds for all three subproblems are better by a factor (!) of 5 to 7 compared to what was asked for in [NW10], but for two subproblems our values are computed only by the algorithms from [GWW12] and need to be verified by computations with the DEM-algorithm.

2 Star Discrepancies

Throughout this work, we denote by n the size of the point set P under consideration and by d the dimension of the problem instance.

For a point y = (y_1, ..., y_d) ∈ [0,1]^d, the local discrepancy of P with respect to y is defined by

d_y(P) := |λ([0,y)) − A(y,P)/n|,

where λ([0,y)) = ∏_{j=1}^d y_j denotes the Lebesgue volume of the box [0,y) and

A(y,P) := |{x ∈ P : x_j < y_j for all j}|

is the number of points of P that fall into this box. The star discrepancy of P is

d*_∞(P) := sup_{y ∈ [0,1]^d} d_y(P),

the largest local discrepancy value.

2.1 Computation of Star Discrepancies

It has been observed in [Nie72] that the computation of d*_∞(P) can be discretized. In fact, if we let Γ_j(P) := {x_j : x ∈ P}, the set of x-values in the jth coordinate (j = 1, ..., d), and Γ̄(P) := (Γ_1(P) ∪ {1}) × ... × (Γ_d(P) ∪ {1}) be the grid spanned by P, then

d*_∞(P) = max_{y ∈ Γ̄(P)} max{ λ([0,y)) − A(y,P)/n, Ā(y,P)/n − λ([0,y)) },

where Ā(y,P) is simply the number of points of P that fall into the closed box [0,y]. That is, instead of evaluating the continuous space [0,1]^d, the search space can be reduced to a discrete one of size O(n^d), cf. Figure 2.

Figure 2: The discrepancy of P is attained in one of the grid points of Γ̄(P).
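A brute-force sketch of this discretization (our illustration, not the DEM-algorithm): enumerate the grid and take the larger of the open-box and closed-box deviations at every grid point. The cost is exponential in d, so this is feasible only for small instances.

```python
import itertools
import numpy as np

def star_discrepancy_exact(P):
    """Exact star discrepancy via the grid discretization of [Nie72].

    Enumerates all grid points built from the point coordinates (plus 1),
    so this is feasible only for small n and d.
    """
    n, d = P.shape
    grids = [np.unique(np.append(P[:, j], 1.0)) for j in range(d)]
    best = 0.0
    for y in itertools.product(*grids):
        y = np.array(y)
        vol = np.prod(y)
        n_open = np.all(P < y, axis=1).sum()     # points in the open box [0, y)
        n_closed = np.all(P <= y, axis=1).sum()  # points in the closed box [0, y]
        best = max(best, vol - n_open / n, n_closed / n - vol)
    return best
```

For instance, the single point {0.5} in one dimension has star discrepancy exactly 0.5, and the two-point set {0.25, 0.75} has star discrepancy 0.25.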

However, for most practical purposes, these O(n^d) function evaluations are way too many, and one has to resort to different methods for evaluating star discrepancy values. As mentioned in the introduction, computing the star discrepancy of a given point set is known to be a hard problem, and the best known exact algorithm, the DEM-algorithm from [DEM96], has a running time of O(n^{1+d/2}). A new promising algorithm for approximate star discrepancy evaluation has been presented in [GWW12]. We discuss this algorithm in Sec. 3.3.

Given the importance of low star discrepancy point sequences in numerical integration and their various other applications in computer science, it is mainly the complexity of evaluating star discrepancies that explains why not much work has been done in the search heuristics community to construct low star discrepancy point sequences. On the other hand, there has been some effort to construct low discrepancy point sets for other measures of irregularity; see, for example, the work in [DRGTL12], which constructs point sets that are optimized for Hickernell's modified L2-discrepancy, a measure that can be computed efficiently in O(d n^2) arithmetic operations. See [DGW13] for a recent survey on the computation of geometric discrepancies.

2.2 Low Star Discrepancy Point Sets

The ultimate goal of star discrepancy theory is to find, for given parameters n and d, a set P ⊆ [0,1]^d of size n such that d*_∞(P) is as small as possible. A closely related question of similar practical and theoretical relevance is the inverse star discrepancy problem: for given ε and d, what is the smallest integer n such that there exists a point set P of size n satisfying d*_∞(P) ≤ ε?

Since the early twentieth century, a lot of work has been done to address these two questions; see [DP10, Hin12, NW10, Gne12] for recent surveys. Many different low discrepancy sequences have been defined (e.g., Sobol, Faure, and Halton sequences), and theoretical bounds on their performance have been proven. Furthermore, general lower bounds for the star discrepancy of any n-point set in d dimensions exist. However, the problem with all these bounds (in the interest of space, we cannot give a detailed survey here, but we refer to [Gne12] for a concise summary of the known bounds) is that they are either asymptotic statements (and thus of limited relevance in the regime of practical interest) or not precise enough to allow for a comparison between the different constructions. We note also that there is a gap between all known lower bounds and the best known upper bounds for the star discrepancy problem. It thus remains a challenging open question to construct such point sets, and it is a well-suited problem for demonstrating the strength of bio-inspired approaches on classical optimization challenges.

In [DRGTL12], the candidates for the L2-optimized point sets are so-called generalized Halton sequences. Since these point sets exhibit not only low L2-discrepancies, but also low star discrepancies, we use here the same construction for finding our candidate sets. Given the lack of space, we give here only the required definitions of the point sets, and we refer the interested reader to [DRGTL12], the books [Mat10, Tez95], or the recent survey [DGW13] for further information on generalized Halton sequences.

The Generalized Halton Sequence. The Halton sequence [Hal60] is a generalization of the one-dimensional van der Corput sequence [van35]. For any prime number b, the ith number of this latter sequence in base b is given by

Φ_b(i) := ∑_{j≥0} a_j(i) b^{−j−1},   (2)

where ∑_{j≥0} a_j(i) b^j is the digital expansion of i in base b; i.e., i = ∑_{j≥0} a_j(i) b^j with digits a_j(i) ∈ {0, 1, ..., b−1}.

The multidimensional Halton sequence is simply obtained by grouping together van der Corput sequences of different bases. More precisely, let p_1 < p_2 < ... < p_d be the first d prime numbers. The ith element of the d-dimensional Halton sequence is given by

X^(i) := (Φ_{p_1}(i), ..., Φ_{p_d}(i)).
The Halton sequence is a low-discrepancy sequence; i.e., its first n points in dimension d satisfy the star discrepancy bound

d*_∞(X^(1), ..., X^(n)) = O(log^d(n)/n).   (3)

In fact, the Halton sequence was the first construction for which Eq. 3 was verified for arbitrary dimension d [Hal60], and up to now there is no sequence known that exhibits a better asymptotic behavior.
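For illustration, a minimal Python sketch of the radical-inverse construction (indexing from i = 1; function names are ours):

```python
def van_der_corput(i, b):
    """ith element of the van der Corput sequence in base b (radical inverse of i)."""
    x, f = 0.0, 1.0 / b
    while i > 0:
        i, digit = divmod(i, b)   # peel off the next base-b digit of i
        x += digit * f            # ... and mirror it behind the radix point
        f /= b
    return x

def halton(n, d):
    """First n points of the d-dimensional Halton sequence (bases = first d primes)."""
    primes = []
    k = 2
    while len(primes) < d:        # generate the first d primes by trial division
        if all(k % p for p in primes):
            primes.append(k)
        k += 1
    return [[van_der_corput(i, b) for b in primes] for i in range(1, n + 1)]
```

For example, the base-2 van der Corput sequence starts 1/2, 1/4, 3/4, 1/8, ..., and the 2-dimensional Halton sequence pairs it with the base-3 sequence 1/3, 2/3, 1/9, ....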

Nevertheless, the Halton sequence suffers from strong correlations between the coordinate sequences in high dimensions, thus motivating Braaten and Weller [BW79] to suggest a generalized (also referred to as scrambled) Halton sequence. In this sequence, Eq. 2 is replaced by

Φ_b^σ(i) := ∑_{j≥0} σ(a_j(i)) b^{−j−1},

where σ is a permutation of {0, 1, ..., b−1} with fixpoint 0 (so that σ(0) = 0). The ith element of the d-dimensional generalized Halton sequence is then defined by

X^(i) := (Φ_{p_1}^{σ_1}(i), ..., Φ_{p_d}^{σ_d}(i)),   (4)

where we abbreviate σ_j := σ_{p_j}, the permutation used for base p_j.

To answer the discrepancy question, we thus need to find a vector σ = (σ_1, ..., σ_d) of permutations such that the star discrepancy of the point set X = {X^(1), ..., X^(n)} is as small as possible. Since the point set is completely determined by the permutations σ_1, ..., σ_d, we call σ the generating vector of X.
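The scrambling amounts to permuting each digit before forming the radical inverse; a sketch under the same conventions as the Halton code above (names are ours):

```python
def scrambled_radical_inverse(i, b, sigma):
    """Generalized van der Corput: apply the permutation sigma of {0,...,b-1}
    (with sigma[0] == 0) to every digit of the base-b expansion of i."""
    x, f = 0.0, 1.0 / b
    while i > 0:
        i, digit = divmod(i, b)
        x += sigma[digit] * f     # permuted digit instead of the raw digit
        f /= b
    return x

def generalized_halton(n, bases, sigmas):
    """First n points of a generalized Halton sequence with generating vector sigmas.
    With bases = the first d primes and identity permutations, this reduces to
    the classical Halton sequence."""
    return [[scrambled_radical_inverse(i, b, s) for b, s in zip(bases, sigmas)]
            for i in range(1, n + 1)]
```

With identity permutations the classical Halton points are recovered; any other choice of fixpoint-0 permutations yields a scrambled variant with the same asymptotic guarantees.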

3 Algorithm

The optimization of the generating vectors for the generalized Halton sequence is done with a very simple genetic algorithm. As presented in Algorithm LABEL:algo:opt, a (μ + λ) scheme is used. In each generation, an offspring population of λ individuals is generated from the parental population of μ individuals. Each individual is produced using either mutation or crossover, according to some probabilities. The new individuals are evaluated (in line LABEL:line:evaluation) and the parents reevaluated (in line LABEL:line:reevaluation) with respect to the discrepancy of the point set that they generate. The selection of the next generation's parental population is based on the fitness of the individuals of both the parent and the offspring population. We use a fixed number of generations as stopping criterion. This section describes each component of the genetic algorithm in more detail.


3.1 Representation

Following a similar procedure as in [DRGTL12], we optimize over the generating vector of the generalized Halton sequence. Our work differs from [DRGTL12] in the objective function ([DRGTL12] considers L2-discrepancies, while we optimize for the star discrepancy) and in the fact that we are interested in generating low star discrepancy point sets, not sequences. This allows us to do the optimization for all the permutations at the same time, since we do not need to find a configuration that would also be good for all smaller numbers of points. Therefore, we use an adjusted implementation of the algorithm from [DRGTL12].

For the optimization of a d-dimensional point set, the genotype of the individuals contains d independent permutations. By definition, the first permutation of the generating vector is always (0, 1), since the first prime number is 2 and we require σ(0) = 0. For the same reason, the second permutation is either (0, 1, 2) or (0, 2, 1), and the third one is chosen from all permutations σ of {0, 1, 2, 3, 4} with σ(0) = 0. For technical reasons, the leading 0 is removed from the permutations in the genotype, and we call the permutation itself (with the prepended 0) the configuration. Finally, the point set is generated by Eq. 4 with the generating vector set to the configuration constructed from the genotype. An example showing the translation from a genotype into a phenotype for a 4-dimensional point set is given in Figure 3.

Figure 3: Representation of the individual’s genotype and its conversion to the phenotype.
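The genotype-to-configuration conversion described above is then a one-liner (our sketch; the function name is ours):

```python
def genotype_to_configuration(genotype):
    """Prepend the fixed point 0 to every permutation stored in the genotype,
    yielding the generating vector (the 'configuration') used in Eq. 4."""
    return [[0] + list(perm) for perm in genotype]
```

For example, the genotype [[1], [2, 1]] encodes the configuration [[0, 1], [0, 2, 1]], i.e., the identity for base 2 and the swap of digits 1 and 2 for base 3.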

3.2 Variations

As seen in Algorithm LABEL:algo:opt, crossover and mutation are used to produce the offspring. These operators, when applied to an individual, affect all underlying permutation vectors of its genotype. The chosen crossover is the partially matched crossover [GL85]; it chooses two crossover points and exchanges the alleles between these two points. The mutation is a uniform partial reordering of the alleles as presented in [DRGTL12]; the reordered alleles are chosen by a uniform matching probability. Both operators preserve the validity of a permutation representation; i.e., the resulting offspring are again permutations of the required length.
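Sketches of both operators on plain Python lists (our implementations of the textbook operators, not the authors' code):

```python
import random

def pmx(p1, p2):
    """Partially matched crossover [GL85]: copy a random segment from p1,
    fill the remaining positions from p2, repairing duplicates through the
    mapping defined by the exchanged segment."""
    size = len(p1)
    a, b = sorted(random.sample(range(size), 2))
    child = [None] * size
    child[a:b + 1] = p1[a:b + 1]
    segment = set(child[a:b + 1])
    for i in list(range(a)) + list(range(b + 1, size)):
        v = p2[i]
        while v in segment:            # follow the PMX repair mapping
            v = p2[p1.index(v)]
        child[i] = v
    return child

def partial_reorder(perm, match_prob=0.3):
    """Uniform partial reordering as in [DRGTL12]: each allele is selected
    with probability match_prob, and the selected alleles are reshuffled."""
    idx = [i for i in range(len(perm)) if random.random() < match_prob]
    vals = [perm[i] for i in idx]
    random.shuffle(vals)
    out = list(perm)
    for i, v in zip(idx, vals):
        out[i] = v
    return out
```

Both functions return a valid permutation of the input, as required for the genotype representation. (In the actual algorithm, the operators must also be adapted to keep the fixpoint conventions of Section 3.1, e.g., by operating on the genotype without the leading 0.)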

3.3 Fitness Evaluation

Naturally, the fitness of the individuals is the star discrepancy of the point set that they generate; i.e., the star discrepancy of their phenotype. We aim at minimizing this value. To assess the discrepancy values we use two different algorithms, the DEM-algorithm proposed in [DEM96] and the TA-algorithm from [GWW12].

The DEM-algorithm is an exact algorithm for computing the star discrepancy of a given point set. It is based on dynamic programming and has a running time of O(n^{1+d/2}). The algorithm has been implemented by M. Wahlström and is available from [Wah]. We use his implementation to evaluate point sets of up to 9 dimensions. For these point sets, the reevaluation step in line LABEL:line:reevaluation of Algorithm LABEL:algo:opt can be skipped.

Where the exact DEM-algorithm has an infeasible running time, we resort to the TA-heuristic proposed in [GWW12]. This algorithm gives a lower bound for the discrepancy of a point set and we use this lower bound to guide our optimization procedure.

In the interest of space, we cannot give a full description of this TA-approach but provide only a high-level overview. The goal of the TA-algorithm is to find a good (i.e., large) lower bound for the star discrepancy of the given point set P. To this end, it performs a guided random search on the grid Γ̄(P) spanned by P, cf. Section 2.1. The algorithm uses a (1+1) scheme; i.e., it keeps one individual at a time. In each iteration, an offspring z is created by sampling, according to some probability distribution, from the neighborhood of the parent individual y. The local star discrepancy d_z(P) is computed, and the offspring replaces the parent if its local discrepancy value is larger than, or at least not much smaller than, the parent's one. The "not much smaller" part is quantified by a threshold value T ≤ 0; i.e., we replace the parent grid point y by the offspring point z if and only if d_z(P) ≥ d_y(P) + T. The threshold value changes over time and converges to 0, so that the algorithm finally outputs a local maximum of the local star discrepancy values of P.
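A much-simplified sketch of such a threshold-accepting lower-bound search follows. The neighborhood sampling and the threshold schedule here are placeholder choices of ours, not the tuned ones of [GWW12]:

```python
import random
import numpy as np

def ta_lower_bound(P, iterations=10000):
    """Threshold-accepting sketch for a LOWER bound on the star discrepancy:
    a (1+1) walk on the grid spanned by the point coordinates, accepting an
    offspring whenever its local discrepancy is worse than the parent's by
    no more than a shrinking threshold."""
    n, d = P.shape
    grids = [np.unique(np.append(P[:, j], 1.0)) for j in range(d)]

    def local(y):
        """max of open-box and closed-box local discrepancy at grid point y."""
        vol = np.prod(y)
        n_open = np.all(P < y, axis=1).sum()
        n_closed = np.all(P <= y, axis=1).sum()
        return max(vol - n_open / n, n_closed / n - vol)

    cur = np.array([random.choice(g) for g in grids])
    best = cur_val = local(cur)
    for t in range(iterations):
        thresh = 0.01 * (1 - t / iterations)   # threshold shrinks toward 0
        cand = cur.copy()
        j = random.randrange(d)                # resample one coordinate on the grid
        cand[j] = random.choice(grids[j])
        val = local(cand)
        if val >= cur_val - thresh:            # accept small worsenings early on
            cur, cur_val = cand, val
            best = max(best, val)
    return best
```

Any value this search returns is a valid lower bound, since it is a local discrepancy attained at an actual grid point; only its tightness depends on the search quality.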

In order to gain precision in our estimation, the TA-algorithm is reapplied to the parental population in each generation (line LABEL:line:reevaluation), and the individual's fitness is the maximum of the current estimate and the newly calculated one.

For the final candidate point sets we increase the accuracy of our star discrepancy estimation by performing an additional 50 runs of the TA-algorithm. These values either re-affirm the previously computed ones or give a slightly larger bound on the star discrepancy of the point set, deviating by at most 5 to 10 from the previous values.

We note that for all reference point sets for which we know the exact discrepancy, we also do an exact DEM-evaluation of our candidate point set. Only for combinations of and where no exact discrepancy values are found in the literature, we resort to the lower bounds provided by the TA-algorithm. These lower bounds are at least as good as the ones presented in the literature, since the new TA-evaluation algorithm is better than the ones used in the previous works, cf. [GWW12].

3.4 Selection

When dealing with the star discrepancy question, we use tournaments as selection mechanism [GD90]. Tournament selection returns the best individual among k individuals that are picked uniformly at random from the entire population; we use k = 3, cf. Table 1.

For the inverse star discrepancy question, we are dealing with two competing objectives: the number n of points on the one hand, and the smallest star discrepancy of any n-point set on the other. We thus need to resort to a totally different evaluation and selection scheme for this problem. We have used the NSGA-II [DPAM02], a standard multiobjective selection algorithm. The evaluation is made using a bisection method: for a given configuration, we try to find the smallest number of points for which the discrepancy is lower than the threshold ε. This method requires only O(log(n_max − n_min)) runs of the discrepancy algorithm for a single configuration, where n_min and n_max are the frontiers of the search for the number of points. The bisection evaluation returns both the discrepancy (which is lower than the threshold) and the number of points. Here again the discrepancy is either computed by the DEM-algorithm (wherever feasible) or by the TA-heuristic (in all other cases).
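The bisection step can be sketched as follows, assuming the discrepancy of the candidate configuration is (roughly) decreasing in the number of points (function and parameter names are ours):

```python
def smallest_n_below(eps, discrepancy_of, n_lo, n_hi):
    """Smallest n in [n_lo, n_hi] with discrepancy_of(n) < eps, assuming
    discrepancy_of is (roughly) decreasing in n.  Uses O(log(n_hi - n_lo))
    discrepancy evaluations per configuration."""
    while n_lo < n_hi:
        mid = (n_lo + n_hi) // 2
        if discrepancy_of(mid) < eps:   # mid points suffice: search lower half
            n_hi = mid
        else:                           # mid points do not suffice: go higher
            n_lo = mid + 1
    return n_lo
```

Because the true discrepancy is only roughly monotone in n, the result is an upper bound on the smallest feasible n rather than a certificate; this is consistent with the role the bisection plays inside the multiobjective search.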

4 Results

We give a brief overview of the experimental setup in Section 4.1. We then present the results of our algorithm. For the star discrepancy question, we compare the results obtained by our algorithm with those found in the literature (Sec. 4.2). For the inverse discrepancy problem, we are not aware of any paper addressing this question experimentally. We thus present our results for answering a question on the inverse discrepancy presented in [NW10] and [Hin12] (Sec. 4.3).

All point configurations achieving the bounds presented below are available online on [DDRG].

4.1 Experimental Setup

All the experiments were run with the algorithm described in the last section. Specific parameters for the operators are given in Table 1. Given the satisfying results of these settings, we did not attempt to fine-tune these parameters. During each discrepancy experiment, the 25 best individuals are kept in an archive. Where discrepancy values are computed only by the TA-algorithm and are thus just lower bounds for the exact value, the fitness of the individuals kept in the archive is dynamic as well (i.e., if the value of an individual changes due to a reevaluation, it also changes in the archive). The process is the same for the inverse discrepancy experiments, but instead of a list of the best individuals, the archive contains all the individuals that are not Pareto-dominated by the archive.¹

¹ We recall that an element y is Pareto-dominated by a set Z if there exists a z ∈ Z that for all objectives is at least as good as y (i.e., z_i ≤ y_i for minimization objectives and z_i ≥ y_i for objectives to be maximized), and is strictly better (z_i < y_i and z_i > y_i, respectively) for at least one objective i.

At the end of an experiment, we send to the final evaluation the entire archive and the final population.

Dimensionality     4–10    11–25    100
Generations        50      100      200
Population Size
Crossover Prob.
Mutation Prob.
Match Prob. Mut.
Tournament Size    3       3        3
Table 1: Optimization algorithm parameters.

4.2 Low Discrepancy Point Sets

Tables 2 to 5 compare the discrepancy values obtained with the genetic optimization of the Halton sequence against the results presented in [Thi01a, Thi01b] and [DGW10]. Except for two parameter combinations, all our point sets achieve a lower discrepancy than what has been presented in the literature. In fact, our sequences sometimes exhibit a discrepancy that is only half as large as the bounds presented in the literature.

d     n      Results from [DGW10]    Optimized Halton
5     95
7     65
7     145
9     85
Table 2: Exact discrepancy results for the sequences presented in [DGW10].
d     n      Results from [DGW10]    Optimized Halton
9*    145
12*   65
12    145
15    65
15    95
15    145
18    95
18    145
20    145
21    95
Table 3: Approximated discrepancy results for the sequences presented in [DGW10]. Lines marked with a star indicate that our final discrepancy measure is exact.
d     n      Results from [Thi01a, Thi01b]    Optimized Halton
4     125
4     625
5     25
5     125
5     625
6     49
6     343
7     49
8     121
9     121
Table 4: Exact discrepancy results for the sequences presented in [Thi01a, Thi01b].
d     n      Results from [Thi01a, Thi01b]    Optimized Halton
7*    343
7     2401
10*   121
10    1331
11*   121
12    169
12    2197
15    289
15    4913
20    529
100   101
Table 5: Approximated discrepancy results for the sequences presented in [Thi01a, Thi01b]. Lines marked with a star indicate that our final discrepancy measure is exact.

Figure 4 presents the discrepancy of our optimized point sets against an aggregation of the best point sets in 7 dimensions of [Thi01b, Figure 5]. Again we see that all our values are smaller than those presented there.

Figure 4: Exact discrepancy results on 7 dimension point sets.

The efficiency of our algorithm allows us to do much more than what can be represented by the comparisons with previous work. As an example, we present in the following our results for minimizing the discrepancy in fixed dimension d for n ranging from 32 to 1024 points, cf. Figures 5 and 6.

Figure 5: Best exact star discrepancy values for n = 32 to 1024.
Figure 6: Best approximated star discrepancy values for n = 32 to 1024.

Figure 5 shows, for several fixed dimensions, how the star discrepancy decreases with growing n. The values in this figure are exact; i.e., they are computed with the DEM-algorithm. As expected, we can see that with growing dimension, more points are needed to achieve the same discrepancy bounds.

In Figure 6 we plot similar results for higher dimensions, in which the DEM-algorithm is infeasible. The values plotted in this chart are thus computed by 50 runs of the TA-algorithm. The situation in these higher dimensions seems to be very similar to that in the smaller ones.

These plots could easily be extended to more points and to higher dimensions, showing that we can easily obtain, for any combination of n and d, a point configuration of low star discrepancy value.

4.3 Inverse Star Discrepancy

The inverse star discrepancy problem is even more complex than the star discrepancy one, since it has an additional parameter: the number n of points to be placed in the d-dimensional unit cube. Given the computational intractability of star discrepancy evaluation, it is therefore natural that not much empirical work has been done to address this question.

Our algorithm, however, can be easily adjusted to compute inverse star discrepancies, as explained in Sec. 3. We use this approach to address Open Problem 42 in [NW10]. The first subproblem asks to construct a point set in 15 dimensions such that and . By an approach suggested by Hinrichs [Hin12], it suffices to construct an -dimensional point set of star discrepancy at most and size . Using his lifting technique, can be turned into a 15-dimensional set of size and discrepancy at most . We are thus interested in the inverse star discrepancy problem with and . By similar arguments (see [Hin12] for the details), we can solve the 15-dimensional problem also by finding a 4-dimensional point configuration of star discrepancy at most or a 5-dimensional point configuration of discrepancy at most with at most points each. As we can see from Table 6, our algorithm easily solves this 15-dimensional problem. For , it outputs a point configuration of discrepancy and size . This is much less than the requested size bound asked for in [NW10, Hin12]. Also the 5-dimensional version of the problem is solved easily; the point configuration has only 172 instead of the requested upper bound of points. Only for the 4-dimensional one the lifting approach of Hinrichs seems too weak to solve the 15-dimensional problem.

We see a similar behavior for the other two subproblems in [NW10]: the original problems (find, (i), a point configuration in of discrepancy and size and, (ii), a configuration in dimensions with and ) can be solved easily for the first two steps of the lifting technique (15 and 10 dimensions, and 25 and 17 dimensions, respectively), but for the last reported step (in 8 and 13 dimensions, respectively) we do not find point configurations meeting the required bound. It is therefore very likely that generalized Halton point sets with such small discrepancy values do not exist.

Table 6 is to be read as follows. The original three subproblems of [NW10, Open Problem 42] are the ones in bold red print. The expected (Exp. n) column presents the number of points required by the problem and by Hinrichs' method, respectively. The bounds column shows the selected search space for the bisection algorithm. The upper bounds were selected from preliminary experiments with randomly scrambled Halton sequences. The lower bounds have been set so that the maximum number of trials in the bisection search is not too high. As mentioned above, it can be seen that for all problems our algorithm finds a suitable sequence that answers the open problems. We did not complete the computations for and and since the bounds for these instances are already much larger than what is requested. The starred values in the table are computed by 50 individual runs of the TA-algorithm. Before officially declaring [NW10, Open Problem 42] solved, the starred values need to be re-affirmed by an exact algorithm (note that the runtime of the DEM-algorithm would be several years on these instances).

d      Exp. n     Bounds         n
8                 [64,128]       104
5                 [128,256]      172
4                 [1044,1300]    1048
15*               [152,280]      251
10*               [472,600]      537
8*                -              -
25*               [422,550]      513
17*               [1094,1350]    1239
13*               -              -
Table 6: Inverse discrepancy results for the three problem instances. Final discrepancies of starred lines are approximated.

Figures 7 to 9 present the Pareto fronts of the final populations for the 8-, 15-, and 25-dimensional cases.² It can be seen that our method achieves a nice diversity in the found point sets, giving a broad range of candidates with different trade-offs. The difference between the final front and the reevaluated one can be explained by the fact that the optimization algorithm takes advantage of the errors made by the TA-algorithm. For example, the same sequence can appear multiple times during the evolution; most of the time its approximated discrepancy will be very close to the true one, but it takes only one bad evaluation (an approximation that is lower than expected) for that point set to make its way into the archive. Increasing the number of repetitions of the TA-algorithm would prevent this from happening, but it would also increase the running time of the optimization.

² Note here that the axes are switched, as the objective here is to minimize, for a given maximal star discrepancy value, the number of points n.

Figure 7: 8 dimensional trade-offs between the number of points and the discrepancy found.
Figure 8: 15 dimensional trade-off between the number of points and the discrepancy.
Figure 9: 25 dimensional trade-off between the number of points and the discrepancy.

5 Conclusions

We have presented a new algorithm for computing low star discrepancy point sets. As shown in Section 4, our results outperform previous point configurations. Furthermore, our algorithm can easily be adapted to compute upper bounds for the inverse star discrepancy. The point sets are available online at [DDRG]. We are confident that our algorithm and the generated point sets will be useful in a broad range of applications. Most notably, our point sets can ensure smaller approximation errors in quasi-Monte Carlo numerical integration and in experimental design.

It would be interesting to study whether our results can be further improved by resorting to other point sets, such as (scrambled) Sobol or Faure configurations.

Another interesting research direction concerns the evaluation of star discrepancy values. As exhibited above, this is a provably difficult task. Still we have seen that the TA-algorithm from [GWW12] provides an accurate estimate wherever this can be checked. Is it possible to design similar heuristics for computing reasonable upper bounds for the star discrepancy of a given point set?


We would like to thank Magnus Wahlström from the Max Planck Institute for Informatics for providing an implementation of the DEM algorithm [DEM96], for his excellent support regarding the evaluation algorithm [GWW12], and for his constructive feedback regarding our work.

We would also like to thank Christian Gagné and Michael Gnewuch for several very helpful discussions on the topic of this work.

Carola Doerr is supported by a Feodor Lynen postdoctoral research fellowship of the Alexander von Humboldt Foundation and by the Agence Nationale de la Recherche under the project ANR-09-JCJC-0067-01.

François-Michel De Rainville is supported by the FQRNT.

All computations have been made possible by the access to Calcul/Compute Québec/Canada supercomputing facilities.


References
  • [BW79] E. Braaten and G. Weller, An improved low-discrepancy sequence for multidimensional quasi-Monte Carlo integration, J. of Comput. Phys. 33 (1979), no. 2, 249–258.
  • [DDRG] C. Doerr, F.-M. De Rainville, and C. Gagné, A database for low star discrepancy point sets and sequences. Available at
  • [DEM96] D. P. Dobkin, D. Eppstein, and D. P. Mitchell, Computing the discrepancy with applications to supersampling patterns, ACM Trans. Graph. 15 (1996), 354–376.
  • [DGW10] B. Doerr, M. Gnewuch, and M. Wahlström, Algorithmic construction of low-discrepancy point sets via dependent randomized rounding, J. Complexity 26 (2010), 490–507.
  • [DGW13] C. Doerr, M. Gnewuch, and M. Wahlström, Calculation of discrepancy measures and applications, Preprint (2013). Available online.
  • [DP10] J. Dick and F. Pillichshammer, Digital nets and sequences, Cambridge University Press, 2010.
  • [DPAM02] K. Deb, A. Pratab, S. Agarwal, and T. Meyarivan, A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II, IEEE Trans. Evolut. Comput. 6 (2002), no. 2, 849–858.
  • [DRGTL09] F.-M. De Rainville, C. Gagné, O. Teytaud, and D. Laurendeau, Optimizing low-discrepancy sequences with an evolutionary algorithm, Proc. of GECCO’09, ACM, 2009, pp. 1491–1498.
  • [DRGTL12]  , Evolutionary optimization of low-discrepancy sequences, ACM Trans. Model. Comput. Simul. 22 (2012), no. 2, 9:1–9:25.
  • [GD90] D. E. Goldberg and K. Deb, A comparative analysis of selection schemes used in genetic algorithms, Proc. of FOGA’90, Morgan Kaufmann, 1990, pp. 69–93.
  • [GKWW12] P. Giannopoulos, C. Knauer, M. Wahlström, and D. Werner, Hardness of discrepancy computation and epsilon-net verification in high dimensions, J. Complexity 28 (2012), 162–176.
  • [GL85] D. E. Goldberg and R. Lingle, Alleles, loci, and the traveling salesman problem, Proc. of ICGA’85, Lawrence Erlbaum Associates, 1985, pp. 154–159.
  • [Gne12] M. Gnewuch, Entropy, randomization, derandomization, and discrepancy, Monte Carlo and Quasi-Monte Carlo Methods 2010 (L. Plaskota and H. Woźniakowski, eds.), Springer-Verlag, 2012, pp. 43–78.
  • [GSW09] M. Gnewuch, A. Srivastav, and C. Winzen, Finding optimal volume subintervals with points and calculating the star discrepancy are NP-hard problems, J. Complexity 25 (2009), 115–127.
  • [GWW12] M. Gnewuch, M. Wahlström, and C. Winzen, A new randomized algorithm to approximate the star discrepancy based on threshold accepting, SIAM J. Numer. Anal. 50 (2012), 781–807.
  • [Hal60] J. H. Halton, On the efficiency of certain quasi-random sequences of points in evaluating multi-dimensional integrals, Numerische Mathematik 2 (1960), 84–90.
  • [Hin12] A. Hinrichs, Discrepancy, integration and tractability, Preprint (2012). Available online.
  • [Hla61] E. Hlawka, Funktionen von beschränkter Variation in der Theorie der Gleichverteilung, Ann. Mat. Pura Appl. 54 (1961), 325–333.
  • [Mat10] J. Matoušek, Geometric discrepancy, 2nd ed., Springer, 2010.
  • [Nie72] H. Niederreiter, Discrepancy and convex programming, Ann. Mat. Pura Appl. 93 (1972), 89–97.
  • [Nie92]  , Random number generation and quasi-Monte Carlo methods, SIAM CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 63, SIAM, 1992.
  • [NW10] E. Novak and H. Woźniakowski, Tractability of multivariate problems. vol. 2, standard information for functionals, EMS Tracts in Mathematics, European Mathematical Society, 2010.
  • [Tez95] S. Tezuka, Uniform random numbers: Theory and practice, The Kluwer International Series in Engineering and Computer Science, vol. 315, Kluwer Academic Publishers, 1995.
  • [Thi01a] E. Thiémard, An algorithm to compute bounds for the star discrepancy, J. Complexity 17 (2001), 850–880.
  • [Thi01b]  , Optimal volume subintervals with points and star discrepancy via integer programming, Math. Meth. Oper. Res. 54 (2001), 21–45.
  • [van35] J. G. van der Corput, Verteilungsfunktionen, Akademie van Wetenschappen, vol. 38, KNAW, 1935, pp. 813–821.
  • [Wah] M. Wahlström, Implementations of the DEM- and the TA-algorithm. Available at