Exploiting Problem Structure in Combinatorial Landscapes: A Case Study on Pure Mathematics Application

12/22/2018 · Xiao-Feng Xie et al. (WIOMAX)

In this paper, we present a method that uses AI techniques to solve a problem from pure mathematics: finding narrow admissible tuples. The original problem is formulated as a combinatorial optimization problem. In particular, we show how to exploit the local search structure to formulate the problem landscape, achieving dramatic reductions of the search space and a non-trivial elimination of search barriers, and then to realize intelligent search strategies for effectively escaping from local minima. Experimental results demonstrate that the proposed method is able to efficiently find the best known solutions. This research sheds light on exploiting the local problem structure for an efficient search in combinatorial landscapes, as an application of AI to a new problem domain.


1 Introduction

AI techniques have shown their advantages in solving different combinatorial optimization problems, such as satisfiability [Sutton et al.2010, Dubois and Dequen2001, Bjorner and Narodytska2015, Ansótegui et al.2015], the traveling salesman problem [Zhang2004], graph coloring [Culberson and Gent2001], job shop scheduling [Watson et al.2003], and automated planning [Bonet and Geffner2001].

These problems can be generalized under the concept of combinatorial landscapes [Reidys and Stadler2002, Schiavinotto and Stützle2007, Tayarani-N and Prugel-Bennett2014], and problem solving can be cast as a search over a space of states. Such problems are often very hard [Cheeseman et al.1991]. In order to pursue an efficient search, it is vital to develop techniques for identifying problem features and exploiting the local search structure [Frank et al.1997, Hoffmann2001, Hoos and Stützle2004]. In particular, it is important to reduce and decompose the problem while preserving the structural features that enable heuristic search, with the effective problem size factored out [Slaney and Walsh2001, Mears and de la Banda2015]. It is also significant to tackle ruggedness [Billinger et al.2014] and neutrality (plateaus) [Collins2006, Benton et al.2010] in such landscapes.

In number theory, a $k$-tuple $H = (h_1, \ldots, h_k)$ is admissible if $\nu_p(H) < p$ for every prime $p$, where $H$ is a strictly increasing sequence of integers, and $\nu_p(H)$ denotes the number of distinct residue classes modulo $p$ occupied by the elements in $H$ [Goldston et al.2009]. The objective is to minimize the diameter of $H$, i.e., $h_k - h_1$, for a given $k$.
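As a concrete illustration of this definition (our own sketch, not taken from the paper; class and method names are illustrative), the following Java snippet tests admissibility by checking that, for every prime $p \le k$, the tuple leaves at least one residue class modulo $p$ unoccupied; primes $p > k$ cannot be fully occupied by only $k$ elements.

    import java.util.*;

    public class AdmissibilitySketch {
        // Primes up to n via a simple sieve of Eratosthenes.
        static List<Integer> primesUpTo(int n) {
            boolean[] composite = new boolean[n + 1];
            List<Integer> primes = new ArrayList<>();
            for (int p = 2; p <= n; p++) {
                if (!composite[p]) {
                    primes.add(p);
                    for (int m = 2 * p; m <= n; m += p) composite[m] = true;
                }
            }
            return primes;
        }

        // A k-tuple is admissible if, for every prime p <= k, its elements
        // do not occupy all p residue classes modulo p.
        static boolean isAdmissible(int[] tuple) {
            for (int p : primesUpTo(tuple.length)) {
                boolean[] occupied = new boolean[p];
                int distinct = 0;
                for (int h : tuple) {
                    int r = Math.floorMod(h, p);
                    if (!occupied[r]) { occupied[r] = true; distinct++; }
                }
                if (distinct == p) return false; // all residue classes modulo p are occupied
            }
            return true;
        }

        public static void main(String[] args) {
            System.out.println(isAdmissible(new int[]{0, 2, 6, 8, 12})); // true: the narrowest 5-tuple
            System.out.println(isAdmissible(new int[]{0, 2, 4}));        // false: covers 0, 1, 2 mod 3
        }
    }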

The early work [Hensley and Richards1974, Gordon and Rodemich1998, Clark and Jarvis2001] on computing narrow admissible tuples was motivated by the incompatibility of the two long-standing Hardy-Littlewood conjectures.

Admissible sets have been used in the recent breakthrough work on small gaps between primes. In [Goldston et al.2009], it was proved that any admissible $k$-tuple (for suitable $k$) contains at least two primes infinitely often, if the primes satisfy certain arithmetic distribution properties. In [Zhang2014], it was proved unconditionally that a finite bound on prime gaps holds, with $\liminf_{n}(p_{n+1} - p_n) < 7 \times 10^7$. The bound was then quickly reduced to $600$ [Maynard2015] and to $246$ [Polymath2014b], and results on bounded intervals containing many primes were also obtained for wider ranges [Polymath2014b]. Moreover, admissible sets have been used to find large gaps between primes [Ford et al.2015].

Most of the existing techniques to find narrow admissible tuples are sieve methods [Hensley and Richards1974, Clark and Jarvis2001, Gordon and Rodemich1998, Polymath2014a, Polymath2014b], although a few local optimizations were proposed recently [Polymath2014a, Polymath2014b].

In this paper, we formally model this problem as a combinatorial optimization problem, and design search strategies to tackle the landscape by exploiting the local search structure. Our solver is systematically tested to show its effectiveness.

2 Search Problem Formulation

For a given $k$, a candidate number set $S$ can be precomputed, and a required prime set $P$, in which each prime satisfies $p \le k$, can be determined. Each candidate tuple $H$ is obtained by selecting $k$ numbers from $S$, and its admissibility is tested using $P$.

Definition 1 (Constraint Optimization Model).

For a given $k$, and given the required $S$ and $P$, the objective is to find a number set $H \subseteq S$ with the minimal diameter $h_k - h_1$, subject to the constraints that $|H| = k$ and $H$ is admissible.

For convenience, $S$, $P$, and $H$ are assumed to be sorted in increasing order. We denote $H$ as $H_k$ if $|H| = k$, as $H_A$ if it is admissible, and as $H_{kA}$ if it satisfies both of the constraints.

Given $S$ and $P$, the following three data structures are defined for facilitating the search:

Definition 2 (Residue Array $R$).

For the primes in $P$, $R$ is calculated on $S$: its $i$th row is $R[i][j] = s_j \bmod p_i$ for $j = 1, \ldots, |S|$.

Definition 3 (Occupancy Matrix $O$).

$O$ is an irregular matrix, in which the $i$th row contains $p_i$ elements corresponding to the residue classes modulo $p_i$. For any given number set $H$, the entry $O[i][r]$ for $r = 0, \ldots, p_i - 1$ gives the count of numbers in $H$ occupying residue class $r$ modulo $p_i$.

Definition 4 (Count Array $C$).

$C$ is an array in which the $i$th entry $C[i]$ gives the count of zero elements in the $i$th row of $O$.

Property 1.

The space requirements for $R$, $O$, and $C$ are respectively $|P| \cdot |S|$, $\sum_{p_i \in P} p_i$, and $|P|$.

Figure 1 gives an example of an admissible set $H$ for a small $k$, together with the full prime set $P$, the residues of the elements modulo each $p_i$, and the corresponding $O$ and $C$.

Figure 1: An example of an admissible set and its corresponding $R$, $O$, and $C$.
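A minimal Java sketch of these three data structures, using the reconstruction of the notation above (the method names are ours): buildResidueArray precomputes $R$ over $S$, buildOccupancy fills $O$ for a given $H$, and buildCount derives $C$ from $O$.

    import java.util.Arrays;

    public class DataStructuresSketch {
        // R[i][j] = S[j] mod P[i], precomputed once for the whole candidate set.
        static int[][] buildResidueArray(int[] S, int[] P) {
            int[][] R = new int[P.length][S.length];
            for (int i = 0; i < P.length; i++)
                for (int j = 0; j < S.length; j++)
                    R[i][j] = Math.floorMod(S[j], P[i]);
            return R;
        }

        // O[i][r] = number of elements of H in residue class r modulo P[i].
        static int[][] buildOccupancy(int[] H, int[] P) {
            int[][] O = new int[P.length][];
            for (int i = 0; i < P.length; i++) {
                O[i] = new int[P[i]];                 // row i has P[i] residue classes
                for (int h : H) O[i][Math.floorMod(h, P[i])]++;
            }
            return O;
        }

        // C[i] = number of unoccupied residue classes (zero entries) in row i of O.
        static int[] buildCount(int[][] O) {
            int[] C = new int[O.length];
            for (int i = 0; i < O.length; i++)
                for (int v : O[i]) if (v == 0) C[i]++;
            return C;
        }

        public static void main(String[] args) {
            int[] P = {2, 3, 5};
            int[] H = {0, 2, 6, 8, 12};
            System.out.println(Arrays.toString(buildCount(buildOccupancy(H, P)))); // [1, 1, 1]
        }
    }

Every entry of $C$ is positive in this small example, so the tuple is admissible (see Property 2 below).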

An admissible $H$ and its corresponding $O$ and $C$ have a few properties:

Property 2 (Admissibility).

$H$ is admissible if $C[i] > 0$ for every row $i$.

There is a constraint violation at row $i$ if $C[i] = 0$. The total violation count should be 0 for an admissible $H$.

Property 3.

Let $S_{i,r} \subseteq S$ be the set of numbers occupying location $(i, r)$ of $O$, i.e., residue class $r$ modulo $p_i$; then there is $|S_{i,r} \cap H| = O[i][r]$.

Property 4.

For each row $i$, there is $\sum_{r=0}^{p_i - 1} O[i][r] = |H|$.

There are two basic properties of an admissible $H$:

Property 5 (Offsetting).

For any integer $c$, the offset set $H + c = \{h + c : h \in H\}$ is admissible, and there is $\mathrm{diam}(H + c) = \mathrm{diam}(H)$.

Property 6 (Subsetting).

Any subset of $H$ is admissible.

Properties 5 and 6 were observed in [Polymath2014b]. Offsetting can be seen as rotating the residue classes at each row of $O$; and subsetting does not decrease any entry of $C$.

Defining a compact $S$ is nontrivial for reducing the size of the problem space, which is exponential in $|S|$.

One plausible way is to let $S$ include all numbers in the interval $[0, d_U]$, and set $|S| = d_U + 1$. Let $d_L$ and $d_U$ be the best-so-far lower and upper bounds of the optimal diameter for the given $k$. During the search, $S$ can be bounded by $d_U$. However, it appears that $d_L$ is very close to $d_U$ [Polymath2014b], and $d_U$ might still be very large when $k$ is big.

Based on Property 4, as $p_i$ is small, the average occupancy $|H|/p_i$ would be large. For the rows with small $p_i$, it would be difficult to find a useful heuristic for changing the unoccupied column at row $i$. Intuitively, each such unoccupied location in $O$ can be assumed to be unchanged during the search. Thus, sieving can be applied to remove any numbers in $S$ that occupy those unoccupied locations, which can be found using Property 3, for a set of the smallest primes $P_s \subseteq P$. After sieving, the proportion of remaining numbers is approximately $\prod_{p \in P_s} (1 - 1/p)$. The completeness of the problem space with respect to the other choices of unoccupied locations can be retained by using a sufficiently large $S$, based on the principle behind Property 5, i.e., offsetting as a choice of residue classes.
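The sieving step and the surviving fraction can be sketched as follows (hypothetical helper names; which residue class is sieved for each small prime is taken as an input, since the paper obtains it from the greedy sieving setting):

    import java.util.*;

    public class SievingSketch {
        // Remove from S every number falling in the chosen (to-be-unoccupied)
        // residue class of each small prime in Ps.
        static int[] sieveCandidates(int[] S, int[] Ps, int[] chosenClass) {
            List<Integer> kept = new ArrayList<>();
            outer:
            for (int s : S) {
                for (int i = 0; i < Ps.length; i++)
                    if (Math.floorMod(s, Ps[i]) == chosenClass[i]) continue outer; // sieved out
                kept.add(s);
            }
            int[] out = new int[kept.size()];
            for (int i = 0; i < out.length; i++) out[i] = kept.get(i);
            return out;
        }

        // Expected fraction of surviving candidates: product of (1 - 1/p) over p in Ps.
        static double survivingFraction(int[] Ps) {
            double f = 1.0;
            for (int p : Ps) f *= 1.0 - 1.0 / p;
            return f;
        }

        public static void main(String[] args) {
            System.out.printf("%.4f%n", survivingFraction(new int[]{2, 3, 5, 7})); // 0.2286
        }
    }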

Remark 1.

The original problem can be converted into a list of subproblems, where each subproblem takes a number $s \in S$ as the starting point $h_1 = s$ and seeks the minimal diameter for that starting point. The optimal solution is then the best solution among all subproblems.

Decomposition [Friesen and Domingos2015] has been successfully used in AI for solving discrete problems. The new perspective of the problem has two features. First, each subproblem has a much smaller state space, since its starting point is fixed. Second, among the roughly $|S|$ subproblems in total, the good solutions of neighboring subproblems might share a large proportion of elements, providing a very useful heuristic clue for efficient adaptive search along this dimension.

Let $P$ contain all primes $p \le k$. In theory, $|P| = \pi(k)$, but we can reduce it to an effective subset. Based on Property 4, if $p_i$ is large, the average occupancy $|H|/p_i$ would be small. Some rows of $O$ would always have $C[i] > 0$, even for the largest possible selections from $S$. The set of corresponding primes, named $P_-$, can thus be removed from $P$ without any loss of completeness for testing admissibility. The effective prime set would be $P_e = P \setminus P_-$.
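A hypothetical criterion consistent with Property 4 for pruning the prime set (our sketch, not necessarily the paper's exact rule): a prime $p$ can never be violated if $p > k$, or if the candidate set $S$ itself occupies fewer than $p$ residue classes modulo $p$, since then no subset of $S$ can occupy all classes.

    import java.util.*;

    public class EffectivePrimesSketch {
        // Keep only the primes that some k-subset of S could conceivably violate.
        static List<Integer> effectivePrimes(int[] S, List<Integer> P, int k) {
            List<Integer> Pe = new ArrayList<>();
            for (int p : P) {
                if (p > k) continue;                 // k elements cannot fill p > k classes
                boolean[] occ = new boolean[p];
                int distinct = 0;
                for (int s : S) {
                    int r = Math.floorMod(s, p);
                    if (!occ[r]) { occ[r] = true; distinct++; }
                }
                if (distinct == p) Pe.add(p);        // only these primes need checking
            }
            return Pe;
        }

        public static void main(String[] args) {
            int[] S = new int[100];
            for (int i = 0; i < 100; i++) S[i] = i;  // unsieved interval [0, 99]
            System.out.println(effectivePrimes(S, List.of(2, 3, 5, 7, 11, 13), 10)); // [2, 3, 5, 7]
        }
    }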

Algorithm 1 gives the specific realization for obtaining $S$ and $P_e$. Here the range bound $d_U$ in Line 1 and the sieved residue classes in Line 2 are obtained using the setting in the greedy sieving method [Polymath2014b].

0:  Input: $k$
1:  $d_U$ = a known upper bound on the optimal diameter; // Let $S_0 = \{0, 1, \ldots, d_U\}$
2:  $S$ = sieving from $S_0$ all numbers in one chosen residue class modulo $p$, for each small prime $p \in P_s$
3:  Obtain $O$ and $C$ for $S$ using the full prime set $P$
4:  $P_e = P \setminus P_-$; // Remove the primes whose rows always satisfy $C[i] > 0$
5:  return  $S$, $P_e$
Algorithm 1 Obtain $S$ and $P_e$

3 Search Algorithm

In this section, the basic operations on auxiliary data structures are first introduced. Some search operators are then realized. Finally, we describe the overall search algorithm.

3.1 Operations on $R$, $O$, and $C$

For every number in $S$, its residues in $R$ are calculated in advance. For the current $H$, the corresponding $O$ and $C$ are synchronously updated locally. The space requirements are given in Property 1.

There are two elemental 1-move operators, i.e., adding a number $x \in S \setminus H$ into $H$ to obtain $H \cup \{x\}$, and removing a number $x \in H$ from $H$ to obtain $H \setminus \{x\}$.

Property 7 (Connectivity).

The two elemental 1-move operators possess the connectivity property over the search space of all $H \subseteq S$.

The connectivity property [Nowicki and Smutnicki1996] states that there exists a finite sequence of such moves to achieve the optimum state from any state in the search space.

For $H = \emptyset$, there are $O[i][r] = 0$ for each entry and $C[i] = p_i$ for each row, based on Definitions 3 and 4. The $O$ and $C$ for any $H$ can then be constructed by adding each element of $H$ using Algorithm 2. For any two states $H_a$ and $H_b$, $H_a$ can be changed into $H_b$ by adding each $x \in H_b \setminus H_a$ and by removing each $x \in H_a \setminus H_b$. The total number of 1-moves is $|H_a \setminus H_b| + |H_b \setminus H_a|$, which can be seen as the distance [Reidys and Stadler2002] between the two states. The shorter the distance, the more similar the two states are.
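For instance, the distance is just the size of the symmetric difference of the two states (a trivial sketch under the notation above):

    import java.util.Set;

    public class DistanceSketch {
        // Number of 1-moves (removals plus additions) needed to turn Ha into Hb.
        static int distance(Set<Integer> Ha, Set<Integer> Hb) {
            int d = 0;
            for (int x : Ha) if (!Hb.contains(x)) d++;  // removals
            for (int x : Hb) if (!Ha.contains(x)) d++;  // additions
            return d;
        }

        public static void main(String[] args) {
            System.out.println(distance(Set.of(0, 2, 6, 8, 12), Set.of(2, 6, 8, 12, 16))); // 2
        }
    }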

Algorithms 2 and 3 respectively give the operations for updating $O$ and $C$ under the two elemental 1-move operators.

0:  Input: $x$, $O$, $C$
1:  for  $i = 1$ to $|P_e|$  do
2:     $r = x \bmod p_i$ (looked up in $R$); $O[i][r] = O[i][r] + 1$
3:     if  $O[i][r] = 1$  then  $C[i] = C[i] - 1$
4:  end for
5:  return  $O$, $C$
Algorithm 2 Update $O$ and $C$ as adding $x$ into $H$
0:  Input: $x$, $O$, $C$
1:  for  $i = 1$ to $|P_e|$  do
2:     $r = x \bmod p_i$ (looked up in $R$); $O[i][r] = O[i][r] - 1$
3:     if  $O[i][r] = 0$  then  $C[i] = C[i] + 1$
4:  end for
5:  return  $O$, $C$
Algorithm 3 Update $O$ and $C$ as removing $x$ from $H$
0:  Input: $x$, $H$ // Include $H$ and its corresponding $O$ and $C$
1:  $\Delta v = 0$
2:  for  $i = 1$ to $|P_e|$  do
3:     if  $O[i][x \bmod p_i] = 0$  and  $C[i] = 1$  then  $\Delta v = \Delta v + 1$
4:  end for
5:  return  $\Delta v$  // The change of the violation count
Algorithm 4 VioCheck: Get the change of the violation count
Property 8 (Time Complexity).

Algorithms 2 and 3 have $O(|P_e|)$ time complexity in updating $O$ and $C$.

In the following realizations, we will focus on the search transitions between states. The admissibility test (Property 2) is not explicitly applied to each state. Instead, VioCheck in Algorithm 4 is used to check $\Delta v$, i.e., the violation count that would be incurred by adding $x$ into $H$, using $R$, $O$, and $C$.
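The incremental bookkeeping of Algorithms 2-4 can be sketched as follows (our reconstruction; the arrays follow the definitions of $O$ and $C$ above, with one row per prime in $P_e$):

    public class IncrementalSketch {
        // Algorithm 2: adding x into H.
        static void add(int x, int[] Pe, int[][] O, int[] C) {
            for (int i = 0; i < Pe.length; i++) {
                int r = Math.floorMod(x, Pe[i]);
                if (++O[i][r] == 1) C[i]--;          // class r becomes occupied
            }
        }

        // Algorithm 3: removing x from H.
        static void remove(int x, int[] Pe, int[][] O, int[] C) {
            for (int i = 0; i < Pe.length; i++) {
                int r = Math.floorMod(x, Pe[i]);
                if (--O[i][r] == 0) C[i]++;          // class r becomes unoccupied
            }
        }

        // Algorithm 4 (VioCheck): rows that would lose their last free class if x were added.
        static int vioCheck(int x, int[] Pe, int[][] O, int[] C) {
            int dv = 0;
            for (int i = 0; i < Pe.length; i++)
                if (O[i][Math.floorMod(x, Pe[i])] == 0 && C[i] == 1) dv++;
            return dv;
        }

        public static void main(String[] args) {
            int[] Pe = {2, 3, 5};
            int[][] O = {new int[2], new int[3], new int[5]};
            int[] C = {2, 3, 5};                     // empty H: every class is free
            for (int h : new int[]{0, 2, 6, 8}) add(h, Pe, O, C);
            System.out.println(vioCheck(12, Pe, O, C)); // 0: 12 can be added admissibly
            System.out.println(vioCheck(4, Pe, O, C));  // 2: adding 4 would violate mod 3 and mod 5
        }
    }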

3.2 Search Operators

We first realize some elemental and advanced search operators to provide the transitions between admissible states.

3.2.1 Side Operators

Let $\sigma \in$ {Left, Right} define the two mutually reverse sides of $H$. For an admissible state $H$, each side operator tries to add or remove a number at the given side of $H$ to obtain an admissible tuple with a diameter as narrow as possible.

SideRemove simply removes the element at the given side $\sigma$ from $H$, and its output is admissible according to Property 6.

Algorithm 5 defines the operation SideAdd for adding a number into $H$ at the given side. To retain admissibility, each candidate to be added is tested using Algorithm 4.

0:  Input: $H$, $\sigma$ // Include $H$ and its corresponding $O$ and $C$
1:  if  $\sigma =$ Left  then  $x = h_1$  else  $x = h_{|H|}$ // Get the boundary value at side $\sigma$
2:  $j$ = GetIndex($x$, $S$) // Obtain the index of $x$ in $S$
3:  while  the next index in direction $\sigma$ stays within $[1, |S|]$  do
4:     if  $\sigma =$ Left  then  $j = j - 1$  else  $j = j + 1$
5:     $\Delta v$ = VioCheck($s_j$, $H$) // Use Alg. 4 to test adding the $j$th number in $S$
6:     if  $\Delta v = 0$  return  $H \cup \{s_j\}$ // Ensure the admissibility
7:  end while
8:  return  $H$  // The original $H$ is unchanged
Algorithm 5 SideAdd: For adding a number into $H$ at side $\sigma$

3.2.2 Repair Operator

The Repair operator repairs $H_A$ into $H_{kA}$ using the side operators: while $|H| < k$, the SideAdd operator is iteratively applied on each side of $H$, and the better of the two results is kept; while $|H| > k$, the SideRemove operator is iteratively applied on each side of $H$, and the better one is kept. Finally, an $H_{kA}$ with $|H| = k$ is obtained.
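A minimal sketch of the contraction direction of Repair (the growth direction would use SideAdd from Algorithm 5 symmetrically; this is our illustration, not the authors' code):

    import java.util.*;

    public class RepairSketch {
        // While |H| > k, remove from whichever side gives the smaller diameter.
        // The output stays admissible by Property 6 (subsetting).
        static TreeSet<Integer> repairDown(TreeSet<Integer> H, int k) {
            while (H.size() > k) {
                TreeSet<Integer> a = new TreeSet<>(H); a.pollFirst(); // remove on the left
                TreeSet<Integer> b = new TreeSet<>(H); b.pollLast();  // remove on the right
                H = (a.last() - a.first()) <= (b.last() - b.first()) ? a : b;
            }
            return H;
        }

        public static void main(String[] args) {
            TreeSet<Integer> H = new TreeSet<>(List.of(0, 2, 6, 8, 12, 16));
            System.out.println(repairDown(H, 5)); // [0, 2, 6, 8, 12], diameter 12
        }
    }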

3.2.3 Shift Search

Algorithm 6 gives the realization of the ShiftSearch operator. The side $\sigma$ is selected at random (Line 2). Each shift [Polymath2014a] is realized by combining SideRemove and SideAdd (Line 4), leading to a distance of 2 from the original state. Starting from $H$, we apply up to $n_s$ shifts in total (Line 3) unless SideAdd fails (Line 5), and the best state found is kept as $H_b$ (Line 6). The state $H_b$ is accepted immediately if its diameter is no worse than that of $H$, or with a fixed probability otherwise (Line 8), following the same principle as in simulated annealing [Kirkpatrick et al.1983]; a sketch of this acceptance rule follows Algorithm 6.

0:  Input: $H$ // Parameters: the number of shifts $n_s$, the acceptance probability
1:  $H_c = H$; $H_b = H$
2:  $\sigma$ = RND({Left, Right}); $\bar{\sigma}$ = the reverse side of $\sigma$
3:  for  $t = 1$ to $n_s$  do
4:     $H_c$ = SideRemove($H_c$, $\sigma$); $H_c$ = SideAdd($H_c$, $\bar{\sigma}$)
5:     if  SideAdd failed  break  // Stop search if SideAdd fails
6:     if  diam($H_c$) $<$ diam($H_b$)  then  $H_b = H_c$
7:  end for
8:  if  diam($H_b$) $\le$ diam($H$)  or  RND(0, 1) $<$ the acceptance probability  return  $H_b$
9:  return  $H$  // The original $H$ is unchanged
Algorithm 6 ShiftSearch: Combine side moves on $H$
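The acceptance rule at Line 8 can be sketched as follows; the fixed acceptance probability accProb is an assumed parameter (its value is not spelled out here), in the spirit of simulated annealing [Kirkpatrick et al.1983]:

    import java.util.Random;

    public class AcceptanceSketch {
        // Accept any non-worsening candidate; accept a worsening one with probability accProb.
        static boolean accept(int candidateDiameter, int currentDiameter,
                              double accProb, Random rnd) {
            if (candidateDiameter <= currentDiameter) return true;
            return rnd.nextDouble() < accProb;
        }

        public static void main(String[] args) {
            System.out.println(accept(50, 52, 0.1, new Random())); // true: non-worsening move
        }
    }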

3.2.4 Insert Moves

Algorithm 7 gives the realization of the InsertMove operator, which works on an input $H_A$ to obtain an admissible output.

The operator is realized in three levels, as defined by the parameter $L_I$. For each candidate $x$ in a compact set within the range of $H$ (Line 2), the violation count $\Delta v$ is calculated using Algorithm 4 (Line 3). At level 0, the value $x$ is immediately inserted into $H$ if $\Delta v = 0$ (Line 4). Otherwise, if $L_I > 0$ and $\Delta v = 1$, the violating row $i$ is found using VioRow (Line 5), which is simply realized by returning the row at which the conditions at Line 3 of Algorithm 4 are satisfied, and then $(x, i)$ is stored into the set $V_1$ (Line 5), which is initialized as empty (Line 1).

At levels 1 and 2, the occupancy of the residue class taken by $x$ is compared with that of the second best (least occupied) location in row $i$ of $O$. Based on Property 3, the numbers occupying a given location can be retrieved directly from $S$ and $R$. Note that the admissibility is retained after adding the inserted elements and removing the elements that occupy the location to be vacated.

Remark 2.

For $H'$ = InsertMove($H$), the diameter is non-increasing at all levels. $|H|$ is increased by 1 at level 0 and by at least 1 at level 1, and keeps unchanged at level 2.

In general, InsertMove is successful if it can increase $|H|$. However, the neighborhood might contain too many infeasible moves, as many 1-moves would trigger multiple violations. It might be inefficient to use systematic adjustments [Polymath2014a]. Our implementation targets feasible moves intelligently by utilizing the violation-check clues.

3.2.5 Local Search

Algorithm 8 gives the realization of the LocalSearch operator. We only focus on the case of improving an input state with $|H| = k$. Let the input have diameter $d$. The SideRemove operator is first applied $n_r$ times (Lines 1-3). Its output has $|H| = k - n_r$, and its diameter is no larger than $d$. The InsertMove operator is then applied up to $n_t$ times (Lines 4-6). Based on Remark 2, the output still has a diameter no larger than $d$. If this step leads to $|H| > k$, the final output after repairing definitely has a lower diameter than $d$. Otherwise, it is still possible to produce a better output as the state is being repaired (Line 7).

3.3 Region-based Adaptive Local Search (RALS)

The region-based adaptive local search (RALS) is realized to tackle the problem decomposition as described in Remark 1.

Let us consider the problem along the dimension of the numbers in $S$. Each subproblem can be indexed by its starting point $s \in S$. Let $f^*(s)$ be the optimal diameter over all $H_{kA}$ with $h_1 = s$; we can form a set of points $(s, f^*(s))$. It can be seen as a one-dimensional fitness landscape representing the fitness function $f^*$ on the discrete variable $s$ from $S$. Note that the optimal solution on this fitness landscape is the optimal solution of the original problem.

Essentially, we would like to focus the search on those promising regions where $f^*(s)$ has higher quality. The fitness landscape itself is not explicit at the beginning, as $f^*(s)$ at each $s$ can only be revealed through extensive local search; nevertheless, early search can provide clues for narrowing down the promising regions.

0:  Input: $H$ // Parameter: the level $L_I$
1:  Initialize $V_1 = \emptyset$ // Used to collect candidates that trigger exactly one violation
2:  for  each candidate $x \in S \setminus H$ within the range of $H$  do
3:     $\Delta v$ = VioCheck($x$, $H$) // Algorithm 4
4:     if  $\Delta v = 0$  return  $H \cup \{x\}$  // Level 0: Insert one number
5:     if  $L_I > 0$ and $\Delta v = 1$  then  $i$ = VioRow($x$); store $(x, i)$ into $V_1$
6:  end for
7:  for  each $(x, i) \in V_1$ satisfying the level-1 condition  do
8:     if  the resulting exchange is admissible  return  the updated $H$  // Level 1
9:  end for
10:  for  each $(x, i) \in V_1$ satisfying the level-2 condition  (In Random Order) do
11:     if  the resulting exchange is admissible  return  the updated $H$  // Level 2
12:  end for
13:  return  $H$  // The original $H$ is unchanged
Algorithm 7 InsertMove: Local moves within the range of $H$
0:  Input: $H$, $n_r$, $n_t$  // Parameters: the removal count $n_r$, the insertion-trial count $n_t$
1:  for  $t = 1$ to $n_r$  do
2:     $\sigma$ = RND({Left, Right}); $H$ = SideRemove($H$, $\sigma$)
3:  end for
4:  for  $t = 1$ to $n_t$  do
5:     $H$ = InsertMove($H$);  if  InsertMove made no change  break
6:  end for
7:  return  Repair($H$)
Algorithm 8 LocalSearch: Remove & insert to improve $H$

3.3.1 Database Management

We use a simple database, denoted by DB, to keep the high-quality solutions found during the search, and index each of them by its starting point $s = h_1$. For each $s$, only the best-so-far solution and the corresponding diameter $f(s)$ are kept. Here $f(s)$ plays the role of a virtual fitness function that is updated during the search process to approximate the real fitness function $f^*(s)$.

The database is managed in a region-based mode. Specifically, the total range of the numbers in $S$ is divided into $n_G$ regions. There are three basic operations on DB.

The DBInit operator provides the initialization. The greedy sieve [Polymath2014b] is applied to generate a state in each region to form the initial DB.

The DBSelect operator selects one incumbent state to which the search operators are applied. In the region-based mode, the selection proceeds in two steps. In the first step, each region provides one candidate; in this paper, we greedily choose the best solution in each region. In the second step, the incumbent state is selected from the candidates provided by all regions. We consider the following implementation: with probability $\epsilon$, the candidate is selected at random; otherwise, tournament selection is applied to select the best solution among a fixed number of randomly chosen candidates.

The DBSave operator simply stores each newly generated solution into DB, and updates $f(s)$ internally. Dominated solutions are discarded.
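A possible sketch of DBSelect under the description above (the randomness probability eps and the tournament size tSize are the two assumed parameters; this is our illustration rather than the authors' code):

    import java.util.Random;

    public class SelectionSketch {
        // Step 1 is assumed done: regionBest[i] holds the best diameter found in region i.
        // Step 2: with probability eps pick a region at random; otherwise run a tournament
        // over tSize randomly chosen regions and keep the one with the smallest diameter.
        static int selectRegion(int[] regionBest, double eps, int tSize, Random rnd) {
            int n = regionBest.length;
            if (rnd.nextDouble() < eps) return rnd.nextInt(n);
            int best = rnd.nextInt(n);
            for (int t = 1; t < tSize; t++) {
                int c = rnd.nextInt(n);
                if (regionBest[c] < regionBest[best]) best = c;
            }
            return best;
        }

        public static void main(String[] args) {
            int[] regionBest = {920, 916, 918, 924};   // illustrative diameters only
            System.out.println(selectRegion(regionBest, 0.2, 2, new Random()));
        }
    }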

3.3.2 Algorithm Realization

Algorithm 9 gives the implementation of RALS to obtain $H_{kA}$ for a given $k$. First, $S$ and $P_e$ are initialized using Algorithm 1, using the known upper bound $d_U$ for the range (Line 1) to ensure that the search space contains an optimal solution. Afterward, the database DB with $n_G$ regions is initiated using the DBInit operator (Line 2).

The search process runs $n_{iter}$ iterations in total. In each iteration, we first select one incumbent solution $H$ from DB using the DBSelect operator (Line 4). The actual search then tackles two parts of the problem: the ShiftSearch operator searches on the virtual fitness landscape (Line 5), and the LocalSearch operator is applied to improve $H$ locally (Lines 6-7). For each search operator in Lines 5-7, the DBSave operator is applied to store the newly generated solutions. Finally, the best solution in DB is returned.

1:  Initialize $S$, $P_e$ using Algorithm 1 // With the range bounded by $d_U$
2:  DB = DBInit($n_G$) // Initiate DB with $n_G$ regions
3:  for  $t = 1$ to $n_{iter}$  do
4:     $H$ = DBSelect(DB) // Select one incumbent solution from DB
5:     $H$ = ShiftSearch($H$); DBSave($H$, DB)
6:     $H$ = LocalSearch($H$, 1, $n_{t1}$); DBSave($H$, DB)
7:     $H$ = LocalSearch($H$, 2, $n_{t2}$); DBSave($H$, DB)
8:  end for
9:  return  the best solution in DB // Return the best solution stored in DB
Algorithm 9 RALS algorithm to obtain $H_{kA}$ for a given $k$

4 Results and Discussion

We now turn to the empirical evaluation of the proposed algorithm. For the benchmark instances, we refer to an online database [Sutherland2015] that has been established and extensively updated to contain the narrowest admissible $k$-tuples known for a wide range of $k$. The algorithm is coded in Java, and our experiments were run on an AMD 4.0GHz CPU. For each instance, 100 independent runs were performed.

4.1 Results by Existing Methods

Most existing techniques for this problem are constructive sieve methods [Polymath2014a]. The sieve methods sieve an integer interval by residue classes modulo small primes and then select an admissible $k$-tuple from the survivors. The easiest way to construct a narrow $H$ is to take the first $k$ primes past $k$ [Zhang2014]. As an optimization, the sieve of Eratosthenes takes $k$ consecutive primes, searching over different starting points, in order to select one among the admissible tuples that minimizes the diameter.
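The simplest construction can be written directly (our sketch): the first $k$ primes past $k$ avoid the residue class 0 modulo every prime $p \le k$, so the resulting tuple is admissible, though the optimized sieves below produce narrower tuples.

    import java.util.Arrays;

    public class ZhangConstructionSketch {
        static boolean isPrime(int n) {
            if (n < 2) return false;
            for (int d = 2; (long) d * d <= n; d++)
                if (n % d == 0) return false;
            return true;
        }

        // The first k primes greater than k form an admissible k-tuple:
        // none of them is divisible by any prime p <= k, so class 0 mod p stays free.
        static int[] firstPrimesPastK(int k) {
            int[] tuple = new int[k];
            for (int n = k + 1, found = 0; found < k; n++)
                if (isPrime(n)) tuple[found++] = n;
            return tuple;
        }

        public static void main(String[] args) {
            System.out.println(Arrays.toString(firstPrimesPastK(5))); // [7, 11, 13, 17, 19]
        }
    }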

The Hensley-Richards sieve [Hensley and Richards1974] uses a heuristic algorithm that sieves an interval centered at the origin to obtain $H$, leading to an asymptotic upper bound on the diameter [Polymath2014b].

The Schinzel sieve, as also considered in [Gordon and Rodemich1998, Clark and Jarvis2001], sieves the odd rather than the even numbers modulo 2. In the shifted version [Polymath2014b], it sieves an interval of the odd integers and of the multiples of small odd primes, where the interval is sufficiently large to ensure at least $k$ survivors, the set of odd primes is sufficiently large to ensure that the survivors form an admissible tuple, and the shift is chosen as the starting point yielding the smallest final diameter.

As a further optimization, the shifted greedy sieve [Polymath2014b] begins as in the shifted Schinzel sieve, but for the remaining primes up to a bound controlled by a constant, a minimally occupied residue class is greedily chosen to sieve. Empirically, it appears to achieve a better bound on the diameter [Polymath2014a].

Table 1 lists the upper bounds obtained by applying a set of existing techniques, including the first $k$ primes past $k$, the Eratosthenes (Zhang) sieve, the Hensley-Richards sieve, and the Schinzel and shifted Schinzel sieves, by running the code provided with [Polymath2014b] (http://math.mit.edu/~drew/ompadm_v0.5.tar) on the benchmark instances. The best known results are retrieved from [Sutherland2015].

Table 1: Upper bounds on the diameter obtained by existing sieve methods.
Table 2: Upper bounds on the diameter obtained by different RALS versions.
Figure 2: Snapshots (panels (a)-(e)) of the virtual fitness landscape $f(s)$, taking each $s$ as the start element of admissible $k$-tuples.

4.2 Results by RALS algorithm

Table 2 lists the average results of different versions of the proposed RALS algorithm. The "BaseVer" version is defined by a fixed setting of the number of database regions, the DBSelect parameters, the number of search iterations, the ShiftSearch parameters, the InsertMove level, and the LocalSearch parameters in Algorithm 9. The other versions are then simply the "BaseVer" version with individual parameters changed.

With no search iterations, the algorithm simply returns the best of the initial states generated by the shifted greedy sieve [Polymath2014a] in the regions. These results are already significantly better than those of the sieve methods in Table 1. The search operators in RALS show their effectiveness, as all RALS versions that perform search iterations achieve significantly better results than the version without them.

Since "BaseVer" fixes the InsertMove level, we can compare the RALS versions with different levels in the InsertMove operator of LocalSearch. In terms of performance, a version with a higher level produces better results than one with a lower level. With greedy search only, the first LocalSearch works as an efficient contraction process [Polymath2014a]. As described in Remark 2, InsertMove performs greedy search at levels 0 and 1, but performs plateau moves at level 2, from the perspective of updating $|H|$. At level 0, the search performs elemental 1-moves. At level 1, the search operates in a very large neighborhood although it has a low time complexity. Plateau moves are used at level 2 to find exits, as the remaining feasible moves are more difficult to check. Finding exits to leave plateaus [Hoffmann2001, Frank et al.1997] has been an important research topic on the local search topology of many combinatorial problems [Bonet and Geffner2001, Benton et al.2010, Sutton et al.2010]. From the viewpoint of the LocalSearch operator, the plateau moves on the part handled by InsertMove help escape from local minima in the landscape of the subproblem.

Table 3: Results of the "BaseVer" version with adjusted parameter settings.

We also compare the RALS versions obtained by varying one of the search-budget parameters of "BaseVer". Especially for certain instances, the improvements from using higher values are extremely significant as the value increases from 100 to 500, but are less significant as it further increases to 1000.

In "BaseVer", the second LocalSearch in Line 7 of Algorithm 9 is effectively unused when its trial count $n_{t2}$ is set to zero. As we increase $n_{t2}$, one instance can be fully solved to the best known solution, and another instance can also be solved to obtain a significantly better result.

Lines 1-3 of Algorithm 8 might be viewed as a perturbation, an effective operator in stochastic local search [Hoos and Stützle2004] for escaping from local minima on rugged landscapes [Tayarani-N and Prugel-Bennett2014, Billinger et al.2014]. In RALS, the second LocalSearch essentially applies a larger perturbation than the first LocalSearch.

Table 2 also gives the comparison for RALS with different values of $\epsilon$ for selecting the incumbent state in DBSelect. The larger the $\epsilon$, the more random the selection is. The best performance is achieved at an intermediate $\epsilon$, neither too greedy nor too random. We can gain some insight from a typical snapshot of the virtual fitness landscape $f(s)$, as shown in Figure 2. It is easy to spot the valleys with high-quality solutions, and they provide significant clues for adaptive search. Meanwhile, the noise on the fitness landscape might reduce the effectiveness of pure greedy search. Thus, there is a trade-off between greedy and random search.

Table 4: New upper bounds on the diameter for the tested instances.

Table 3 lists the performance measures, including the average diameter, the success rate of finding the best known solutions (SuccRate), and the computation time, for the "BaseVer" version under two adjusted parameter settings. This version achieves a high SuccRate for the smaller instances and a moderate SuccRate for the larger ones under the larger search budget. It also reaches a reasonably good SuccRate under the smaller budget, with a lower execution time.

Finally, we apply RALS to the instances in [Sutherland2015] and compare the results. In Table 4, we list the new upper bound and the improvement on the diameter for each of the 48 instances. Eight instances among them stand out in particular. Thus, AI-based methods might make further contributions to pure mathematics applications.

5 Conclusions

We presented a region-based adaptive local search (RALS) method to solve a problem from pure mathematics: finding narrow admissible tuples. We formulated the original problem as a combinatorial optimization problem. We showed how to exploit the local search structure to tackle the combinatorial landscape, and then to realize search strategies for adaptive search and for effectively approaching high-quality solutions. Experimental results demonstrated that the method can efficiently find best known and new solutions.

There are several aspects of this work that warrant further study. A deeper analysis might be applied to better identify properties of the local search topology on the landscape. One might also apply advanced AI strategies, e.g., algorithm portfolios [Gomes and Selman2001] and SMAC [Hutter et al.2011], to obtain an even greater computational advantage.

References

  • [Ansótegui et al.2015] C. Ansótegui, F. Didier, and J. Gabas. Exploiting the structure of unsatisfiable cores in MaxSAT. In IJCAI, pages 283–289, 2015.
  • [Benton et al.2010] J. Benton, K. Talamadupula, P. Eyerich, et al. G-value plateaus: A challenge for planning. In ICAPS, pages 259–262, 2010.
  • [Billinger et al.2014] S. Billinger, N. Stieglitz, and T. R. Schumacher. Search on rugged landscapes: An experimental study. Organization Science, 25(1):93–108, 2014.
  • [Bjorner and Narodytska2015] N. Bjorner and N. Narodytska. Maximum satisfiability using cores and correction sets. In IJCAI, pages 246–252, 2015.
  • [Bonet and Geffner2001] B. Bonet and H. Geffner. Planning as heuristic search. Artificial Intelligence, 129(1):5–33, 2001.
  • [Cheeseman et al.1991] P. Cheeseman, B. Kanefsky, and W. M. Taylor. Where the really hard problems are. In IJCAI, pages 331–340, 1991.
  • [Clark and Jarvis2001] David Clark and Norman Jarvis. Dense admissible sequences. Mathematics of Computation, 70(236):1713–1718, 2001.
  • [Collins2006] M. Collins. Finding needles in haystacks is harder with neutrality. Genetic Programming and Evolvable Machines, 7(2):131–144, 2006.
  • [Culberson and Gent2001] J. Culberson and I. Gent. Frozen development in graph coloring. Theoretical Computer Science, 265(1):227–264, 2001.
  • [Dubois and Dequen2001] O. Dubois and G. Dequen. A backbone-search heuristic for efficient solving of hard 3-SAT formulae. In IJCAI, pages 248–253, 2001.
  • [Ford et al.2015] K. Ford, J. Maynard, and T. Tao. Chains of large gaps between primes. arXiv:1511.04468, 2015.
  • [Frank et al.1997] J. Frank, P. Cheeseman, and J. Stutz. When gravity fails: Local search topology. Journal of Artificial Intelligence Research, 7:249–281, 1997.
  • [Friesen and Domingos2015] A. L. Friesen and P. Domingos. Recursive decomposition for nonconvex optimization. In IJCAI, pages 253–259, 2015.
  • [Goldston et al.2009] D. A. Goldston, J. Pintz, and C. Y. Yíldírím. Primes in tuples I. Annals of Mathematics, 170(2):819–862, 2009.
  • [Gomes and Selman2001] C. P. Gomes and B. Selman. Algorithm portfolios. Artificial Intelligence, 126(1):43–62, 2001.
  • [Gordon and Rodemich1998] D. Gordon and G. Rodemich. Dense admissible sets. In International Symposium on Algorithmic Number Theory, pages 216–225, 1998.
  • [Hensley and Richards1974] D. Hensley and I. Richards. Primes in intervals. Acta Arithmetica, 4(25):375–391, 1974.
  • [Hoffmann2001] J. Hoffmann. Local search topology in planning benchmarks: An empirical analysis. In IJCAI, pages 453–458, 2001.
  • [Hoos and Stützle2004] H. H. Hoos and T. Stützle. Stochastic Local Search: Foundations & Applications. Elsevier, 2004.
  • [Hutter et al.2011] F. Hutter, H. H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In LION, pages 507–523. 2011.
  • [Kirkpatrick et al.1983] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, et al. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.
  • [Maynard2015] J. Maynard. Small gaps between primes. Annals of Mathematics, 181(1):383–413, 2015.
  • [Mears and de la Banda2015] C. Mears and M. G. de la Banda. Towards automatic dominance breaking for constraint optimization problems. In IJCAI, pages 360–366, 2015.
  • [Nowicki and Smutnicki1996] E. Nowicki and C. Smutnicki. A fast taboo search algorithm for the job shop problem. Management science, 42(6):797–813, 1996.
  • [Polymath2014a] D. H. J. Polymath. New equidistribution estimates of Zhang type. Algebra & Number Theory, 8(9):2067–2199, 2014.
  • [Polymath2014b] D. H. J. Polymath. Variants of the Selberg sieve, and bounded intervals containing many primes. Research in the Mathematical Sciences, 1(1):1–83, 2014.
  • [Reidys and Stadler2002] C. Reidys and P. Stadler. Combinatorial landscapes. SIAM Review, 44(1):3–54, 2002.
  • [Schiavinotto and Stützle2007] T. Schiavinotto and T. Stützle. A review of metrics on permutations for search landscape analysis. Computers & Operations Research, 34(10):3143–3153, 2007.
  • [Slaney and Walsh2001] J. Slaney and T. Walsh. Backbones in optimization and approximation. In IJCAI, pages 254–259, 2001.
  • [Sutherland2015] A. V. Sutherland. Narrow Admissible Tuples. http://math.mit.edu/~primegaps, 2015.
  • [Sutton et al.2010] A. Sutton, A. Howe, and L. Whitley. Directed plateau search for MAX-k-SAT. In Annual Symposium on Combinatorial Search, pages 90–97, 2010.
  • [Tayarani-N and Prugel-Bennett2014] M.-H. Tayarani-N and A. Prugel-Bennett. On the landscape of combinatorial optimization problems. IEEE Transactions on Evolutionary Computation, 18(3):420–434, 2014.
  • [Watson et al.2003] J.-P. Watson, J. Beck, A. Howe, and L. Whitley. Problem difficulty for tabu search in job-shop scheduling. Artificial Intelligence, 143(2):189–217, 2003.
  • [Zhang2004] W. Zhang. Phase transitions and backbones of the asymmetric traveling salesman problem. Journal of Artificial Intelligence Research, 21:471–497, 2004.
  • [Zhang2014] Y. Zhang. Bounded gaps between primes. Annals of Mathematics, 179(3):1121–1174, 2014.