Efficient Hill-Climber for Multi-Objective Pseudo-Boolean Optimization

Local search and iterated local search algorithms are a basic search technique. Local search can be used as a stand-alone search method, but it can also be hybridized with evolutionary algorithms. Recently, it has been shown that it is possible to identify improving moves in Hamming neighborhoods for k-bounded pseudo-Boolean optimization problems in constant time. This means that local search does not need to enumerate neighborhoods to find improving moves. It also means that evolutionary algorithms do not need to use random mutation as an operator, except perhaps as a way to escape local optima. In this paper, we show how improving moves can be identified in constant time for multi-objective problems that are expressed as k-bounded pseudo-Boolean functions. In particular, multi-objective forms of NK Landscapes and Mk Landscapes are considered.



1 Introduction

Local search and iterated local search algorithms [8] start at an initial solution and then search for an improving move based on a notion of a neighborhood of solutions adjacent to the current solution. This paper considers k-bounded pseudo-Boolean functions, for which the Hamming distance 1 neighborhood is the most commonly used local search neighborhood.

Recently, it has been shown that the location of improving moves can be calculated in constant time for the Hamming distance 1 "bit flip" neighborhood [16]. This has implications both for local search algorithms and for simple evolutionary algorithms such as the (1+1) Evolution Strategy. Since we can calculate the location of improving moves, we do not need to enumerate neighborhoods to discover them.

Chicano et al. [3] generalized this result and presented a local search algorithm that explores the solutions contained in a Hamming ball of radius r around a solution in constant time. This means that evolutionary algorithms need not use mutation to find improving moves; instead, mutation should be used either to make larger moves (that flip more than r bits) or to enable a form of restart. It also makes crossover more important. Goldman et al. [6] combined local search that automatically calculates the location of improving moves in constant time with recombination to achieve globally optimal results on relatively large Adjacent NK Landscape problems (e.g., 10,000 variables).

Whitley [15] has introduced the notion of Mk Landscapes to replace NK Landscapes. Mk Landscapes are k-bounded pseudo-Boolean optimization problems composed of a linear combination of M subfunctions, where each subfunction is a pseudo-Boolean optimization problem defined over at most k variables. This definition is general enough to include NK Landscapes, MAX-kSAT, as well as spin glass problems.

In this paper, we extend these related concepts to multi-objective optimization. We define a class of multi-objective Mk Landscapes and show how they generalize previous definitions of multi-objective NK Landscapes. We also show how exact methods can be used to select improving moves in constant time. In the multi-objective case, the notion of an "improving move" is more complex, because a move can improve all of the objectives or only some of them. When there is improvement in all objectives, the move should clearly be accepted. However, when only a subset of the objectives improves, it is less clear which moves should be accepted, because it is possible for search algorithms to cycle and revisit previously discovered solutions. Methods are proposed that allow the identification of improving moves in constant time for multi-objective optimization. Methods are also proposed to prevent local search algorithms from cycling and thus repeatedly revisiting previously discovered solutions. The results of this work could also be introduced in existing local search algorithms for multi-objective optimization, like Anytime Pareto Local Search [5].

The rest of the paper is organized as follows. In the next section we introduce multi-objective pseudo-Boolean optimization problems. Section 3 defines the "Scores" of a solution. The Score vector tracks changes in the evaluation function and makes it possible to track the locations of improving moves. An algorithm is introduced to track multiple Scores and to efficiently update them for multi-objective optimization. Section 4 considers how to address the problem of selecting improving moves in a multi-objective search space when a move improves some, but not all, of the objectives. Section 5 empirically evaluates the proposed algorithms. Section 6 summarizes the conclusions and outlines potential future work.

2 Multi-Objective Pseudo-Boolean Optimization

In this paper we consider pseudo-Boolean vector functions f : B^n → R^d with k-bounded epistasis, where the component functions are embedded landscapes [7] or Mk Landscapes [15]. We will extend the concept of Mk Landscapes to the multi-objective domain and, thus, we will base our nomenclature on that of Whitley [15].

Definition 1 (Vector Mk Landscape)

Given two constants k and d, a vector Mk Landscape f : B^n → R^d is a d-dimensional vector pseudo-Boolean function whose components are Mk Landscapes. That is, each component f_i can be written as a sum of m_i subfunctions, each one depending at most on k input variables¹:

f_i(x) = Σ_{l=1}^{m_i} f_i^{(l)}(x),  for 1 ≤ i ≤ d,   (1)

where each subfunction f_i^{(l)} depends only on k components of x.

¹In general, we will use boldface to denote vectors in R^d, as f, but we will use normal weight for vectors in B^n, like x.

This definition generalizes that of Aguirre and Tanaka [1] for MNK Landscapes. In Figure 1(a) we show a vector Mk Landscape with d = 2 dimensions. The first objective function, f₁, can be written as the sum of 5 subfunctions, f₁^{(1)} to f₁^{(5)}. The second objective function, f₂, can be written as the sum of 3 subfunctions, f₂^{(1)} to f₂^{(3)}. All the subfunctions depend at most on k variables.

It could seem that the previous class of functions is restrictive, because each subfunction depends on a bounded number of variables. However, every compressible pseudo-Boolean function can be transformed in polynomial time into a quadratic pseudo-Boolean function (with k = 2) [12].

A useful tool for the forthcoming analysis is the co-occurrence graph G = (V, E) [4], where V is the set of Boolean variables and E contains all the pairs of variables (x_i, x_j) that co-occur in a subfunction (both variables are arguments of the subfunction). In other terms, two variables x_i and x_j co-occur if there exists a subfunction mask where the i-th and j-th bits are both 1. In Figure 1 we show the subfunctions of a vector Mk Landscape with k-bounded epistasis and its corresponding variable co-occurrence graph.
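As a concrete illustration, the co-occurrence graph can be built directly from the variable index sets of the subfunctions. The sketch below assumes each subfunction is given as the list of variable indices it depends on (the paper encodes the same information as bit masks); the function name is ours.

```python
from itertools import combinations

def co_occurrence_graph(masks):
    """Edge set of the variable co-occurrence graph: two variables
    co-occur iff some subfunction depends on both of them."""
    edges = set()
    for mask in masks:
        # every pair of variables inside one subfunction co-occurs
        for i, j in combinations(sorted(mask), 2):
            edges.add((i, j))
    return edges

# Toy example: f = g(x0, x1) + h(x1, x2); x0 and x2 do not co-occur.
E = co_occurrence_graph([[0, 1], [1, 2]])
```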

(a) Vector Mk Landscape

(b) Co-occurrence graph
Figure 1: A vector Mk Landscape with d = 2 dimensions (top) and its corresponding co-occurrence graph (bottom).

We will consider, without loss of generality, that all the objectives (components of the vector function) are to be maximized. Next, we include the definition of some standard multi-objective concepts to make the paper self-contained.

Definition 2 (Dominance)

Given a vector function f : B^n → R^d, we say that solution x ∈ B^n dominates solution y ∈ B^n, denoted with x ≻_f y, if and only if f_i(x) ≥ f_i(y) for all 1 ≤ i ≤ d and there exists j such that f_j(x) > f_j(y). When the vector function is clear from the context, we will use x ≻ y instead of x ≻_f y.
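Definition 2 translates directly into code. A minimal check for the maximization case (the function name is ours):

```python
def dominates(fx, fy):
    """True iff objective vector fx dominates fy under maximization:
    no worse in every component, strictly better in at least one."""
    return (all(a >= b for a, b in zip(fx, fy))
            and any(a > b for a, b in zip(fx, fy)))
```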

Definition 3 (Pareto Optimal Set and Pareto Front)

Given a vector function f : B^n → R^d, the Pareto Optimal Set P is the set of solutions that are not dominated by any other solution in B^n. That is:

P = { x ∈ B^n | ∄ y ∈ B^n : y ≻ x }.   (2)

The Pareto Front is the image by f of the Pareto Optimal Set: PF = f(P).

Definition 4 (Set of Non-dominated Solutions)

Given a vector function f : B^n → R^d, we say that a set X ⊆ B^n is a set of non-dominated solutions when there is no pair of solutions in X where one dominates the other, that is, ∄ x, y ∈ X : y ≻ x.

Definition 5 (Local Optimum [11])

Given a vector function f : B^n → R^d and a neighborhood function N : B^n → 2^{B^n}, we say that a solution x is a local optimum if it is not dominated by any other solution in its neighborhood: ∄ y ∈ N(x) : y ≻ x.

3 Moves in a Hamming Ball

We can characterize a move in B^n by a binary string v ∈ B^n having 1 in all the bits that change in the solution. Following [3] we will extend the concept of Score² to vector functions.

²What we call Score here is also named δ-evaluation by other authors [13].

Definition 6 (Score)

For x ∈ B^n, v ∈ B^n and a vector function f : B^n → R^d, we denote the Score of x with respect to move v as S_v(x), defined as follows:

S_v(x) = f(x ⊕ v) − f(x),   (3)

where ⊕ denotes the exclusive OR bitwise operation (sum in Z₂).

The Score S_v(x) is the change in the vector function when we move from solution x to solution x ⊕ v, that is, the solution obtained by flipping in x all the bits that are 1 in v. Our goal is to efficiently decide where to move from the current solution. If possible, we want to apply improving moves to our current solution. While the concept of "improving" move is clear in the single-objective case (an improving move is one that increases the value of the objective function), in multi-objective optimization any of the component functions could improve, disimprove or stay unchanged. Thus, we need to be more precise in this context and define what we mean by an "improving" move. It is useful to define two kinds of improving moves: weak improving moves and strong improving moves. The reason for this distinction will become clear in Section 4.
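As a baseline for what the efficient machinery below avoids, the Score of Definition 6 can always be computed naively with two full evaluations of f. This sketch (names are ours) treats solutions and moves as Python ints used as bit strings:

```python
def score(f, x, v):
    """S_v(x) = f(x XOR v) - f(x); f returns a tuple of d objectives."""
    fx, fxv = f(x), f(x ^ v)
    return tuple(a - b for a, b in zip(fxv, fx))

# Toy 2-objective function: (number of one bits, minus the lowest bit).
f = lambda x: (bin(x).count("1"), -(x & 1))
```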

Definition 7 (Strong and Weak Improving Moves)

Given a solution x ∈ B^n, a move v ∈ B^n and a vector function f : B^n → R^d, we say that v is a weak improving move if there exists an i ∈ {1, …, d} such that f_i(x ⊕ v) > f_i(x). We say that v is a strong improving move if it is a weak improving move and f_i(x ⊕ v) ≥ f_i(x) for all 1 ≤ i ≤ d.

Using our definition of Score, we can say that a move v is a weak improving move if there exists an i for which S_{v,i}(x) > 0. It is a strong improving move if S_{v,i}(x) ≥ 0 for all i and there exists a j for which S_{v,j}(x) > 0.

From Definition 7 it can be noticed that if v is a strong improving move in x then x ⊕ v ≻ x, that is, the concept of strong improving move coincides with that of dominance. It can also be noticed that in the single-objective case, d = 1, both concepts are the same. Strong improving moves are clearly desirable, since they cannot disimprove any objective and they improve at least one. Weak improving moves, on the other hand, improve at least one objective but could disimprove others.

In particular, if v is a weak, but not strong, improving move in solution x, then it improves at least one objective, say the i-th, and disimproves at least another one, say the j-th. If this move is taken, in the new solution x ⊕ v the same move v will again be a weak, but not strong, improving move. However, now it will improve (at least) the j-th objective and disimprove (at least) the i-th. Taking v again in x ⊕ v leads back to x, and the algorithm cycles. Thus, any hill climber taking weak improving moves must include a mechanism to avoid cycling.
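The cycling behaviour is easy to reproduce on a one-bit, two-objective toy function (our own minimal example, not from the paper): the bit flip is a weak improving move at both solutions, so a climber that greedily accepts any weak improving move alternates forever.

```python
# f(x) = (x, 1 - x) on a single bit: flipping always helps one objective
# and hurts the other, in both directions.
f = lambda x: (x, 1 - x)
score = lambda x, v: tuple(a - b for a, b in zip(f(x ^ v), f(x)))

s_at_0 = score(0, 1)   # weak improving at x = 0
s_at_1 = score(1, 1)   # weak improving again at x = 1, leading back to x = 0
```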

Scores are introduced in order to efficiently identify where the (weak or strong) improving moves are. For this purpose, we can use a data structure in which all the improving moves can be accessed in constant time. As the search progresses the Score values change, and they also move in the data structure to keep the improving moves separated from the rest. A naïve approach to track all improving moves in a Hamming ball of radius r around a solution would require storing all possible Scores S_v(x) for moves v with |v| ≤ r, where |v| denotes the number of 1 bits in v.

If we naively use equation (3) to explicitly update the Scores, we will have to evaluate all the neighbors in the Hamming ball. Instead, if the objective function is a vector Mk Landscape fulfilling the requirements described in Theorem 3.1, we can design an efficient next improvement hill climber for the radius-r neighborhood that only stores a linear number of Scores and requires constant time to update them.

3.1 Scores Update

Using the fact that each component f_i of the objective vector function is an Mk Landscape, we can write:

S_{v,i}(x) = Σ_{l=1}^{m_i} s_{v,i}^{(l)}(x),   (4)

where we use s_{v,i}^{(l)}(x) = f_i^{(l)}(x ⊕ v) − f_i^{(l)}(x) to represent the score of the subfunction f_i^{(l)} for move v. Let us define m_{i,l} ∈ B^n as the binary string such that the j-th element of m_{i,l} is 1 if and only if f_i^{(l)} depends on variable x_j. The vector m_{i,l} can be considered as a mask that characterizes the variables that affect f_i^{(l)}. Since f_i^{(l)} has bounded epistasis k, the number of ones in m_{i,l}, denoted with |m_{i,l}|, is at most k. By the definition of m_{i,l}, the next equalities immediately follow.


s_{v,i}^{(l)}(x) = 0,  if v ∧ m_{i,l} = 0,   (5)
s_{v,i}^{(l)}(x) = s_{v ∧ m_{i,l}, i}^{(l)}(x),  otherwise.   (6)

Equation (6) claims that if none of the variables that change in the move characterized by v is an argument of f_i^{(l)}, the Score of this subfunction is zero, since the value of this subfunction will not change from x to x ⊕ v. On the other hand, if f_i^{(l)} depends on variables that change, we only need to consider for the evaluation of s_{v,i}^{(l)} the changed variables that affect f_i^{(l)}. These variables are characterized by the mask v ∧ m_{i,l}. With the help of (6) we can re-write (4):


S_{v,i}(x) = Σ_{l : m_{i,l} ∧ v ≠ 0} s_{v ∧ m_{i,l}, i}^{(l)}(x).   (7)

Equation (7) simply says that we do not have to consider all the subfunctions to compute a Score, only those that depend on variables changed by the move. This can reduce the run time needed to compute the Scores from scratch.

During the search, instead of computing the Scores using (7) after every move, it is more time-efficient to store the Scores of the current solution in memory and update only those that are affected by the move.

In the following, abusing notation, given a move v we will also use v to represent the set of variables that are flipped in the move (in addition to the binary string).

For each of the Scores S_{t,i} to update, the change related to subfunction f_i^{(l)} can be computed with the help of x and v. The component S_{t,i} will be updated by subtracting s_{t,i}^{(l)}(x) and adding s_{t,i}^{(l)}(x ⊕ v). This procedure is shown in Algorithm 1, where the term S_{t,i} represents the i-th component of the Score of move t stored in memory and M is the set of moves whose Scores are stored. In the worst (and naïve) case, M is the set of all strings with at most r ones and |M| = O(n^r). However, we will prove in Section 3.2 that, for some vector Mk Landscapes, we only need to store O(n) Scores to identify improving moves in a ball of radius r.

1:Input: current solution x, move v, stored Scores S
2:for all subfunctions f_i^{(l)} such that m_{i,l} ∧ v ≠ 0 do
3:   for all moves t ∈ M such that m_{i,l} ∧ t ≠ 0 do
4:      S_{t,i} ← S_{t,i} − s_{t,i}^{(l)}(x);
5:      S_{t,i} ← S_{t,i} + s_{t,i}^{(l)}(x ⊕ v);
6:   end for
7:end for
Algorithm 1 Efficient algorithm for Scores update
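A runnable sketch of Algorithm 1 under assumed data structures (a dict of score lists and one closure per subfunction; not the paper's implementation):

```python
def update_scores(S, subfns, moves, x, v):
    """Incrementally update stored Scores after taking move v at solution x.

    S[t][i] : stored i-th score component of move t (ints as bit strings)
    subfns  : list of (i, mask, g) with i the objective index, mask the
              variable bit mask of the subfunction and g(x) its value on x
    moves   : the stored moves M
    Only subfunctions touched by v, and only the stored moves overlapping
    those subfunctions, are visited.
    """
    for i, mask, g in subfns:
        if mask & v == 0:
            continue              # subfunction unaffected by the move
        for t in moves:
            if mask & t == 0:
                continue          # subfunction does not influence S_t
            # subtract the old per-subfunction score, add the new one
            S[t][i] += -(g(x ^ t) - g(x)) + (g(x ^ v ^ t) - g(x ^ v))

# Single objective f(x) = popcount(x) split into two subfunctions.
g1 = lambda x: bin(x & 0b011).count("1")
g2 = lambda x: (x >> 2) & 1
subfns = [(0, 0b011, g1), (0, 0b100, g2)]
S = {1: [1], 2: [1], 4: [1]}      # Scores of the 1-bit flips at x = 0
update_scores(S, subfns, [1, 2, 4], 0, 1)   # take the move flipping bit 0
```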

3.2 Scores Decomposition

Some Scores can be written as a sum of other Scores. The benefit of such a decomposition is that we do not really need to store all the Scores in memory to have complete information on the influence that the moves in a Hamming ball of radius r have on the objective function f. The co-occurrence graph G has a main role in identifying the moves whose Scores are fundamental to recover all the improving moves in the Hamming ball.

Let us denote with G[v] the subgraph of G induced by the move v, that is, the subgraph containing only the vertices in v and the edges of G between vertices in v.

Proposition 1 (Score decomposition)

Let v₁, v₂ ∈ B^n be two moves such that v₁ ∧ v₂ = 0 and the variables in v₁ do not co-occur with the variables in v₂. In terms of the co-occurrence graph this implies that there is no edge between a variable in v₁ and a variable in v₂ and, thus, G[v₁ ∨ v₂] = G[v₁] ∪ G[v₂]. Then the score function of the move v₁ ∨ v₂ can be written as:

S_{v₁ ∨ v₂}(x) = S_{v₁}(x) + S_{v₂}(x).   (8)

Proof. Using (7) we can write:

S_{v₁ ∨ v₂, i}(x) = Σ_{l : m_{i,l} ∧ (v₁ ∨ v₂) ≠ 0} s_{(v₁ ∨ v₂) ∧ m_{i,l}, i}^{(l)}(x).

Since variables in v₁ do not co-occur with variables in v₂, there is no subfunction f_i^{(l)} such that m_{i,l} ∧ v₁ ≠ 0 and m_{i,l} ∧ v₂ ≠ 0 at the same time. Then we can split the sum:

S_{v₁ ∨ v₂, i}(x) = Σ_{l : m_{i,l} ∧ v₁ ≠ 0} s_{v₁ ∧ m_{i,l}, i}^{(l)}(x) + Σ_{l : m_{i,l} ∧ v₂ ≠ 0} s_{v₂ ∧ m_{i,l}, i}^{(l)}(x) = S_{v₁,i}(x) + S_{v₂,i}(x),

and the result follows. ∎

For example, in the vector Mk Landscape of Figure 1, if two variables do not co-occur, the scoring function of the move flipping both of them can be written as the sum of the scoring functions of the two moves flipping each of them separately.

A consequence of Proposition 1 is that we only need to store the Scores of the moves v for which G[v] is a connected subgraph. If G[v] is not a connected subgraph, then there are sets of variables v₁ and v₂ such that v = v₁ ∨ v₂ and G[v] = G[v₁] ∪ G[v₂], and, applying Proposition 1, we have S_v(x) = S_{v₁}(x) + S_{v₂}(x). Thus, we can recover all the Scores in the Hamming ball of radius r from the ones of the moves v where |v| ≤ r and G[v] is connected. In the following we will assume that the set M of Algorithm 1 is:

M = { v ∈ B^n | |v| ≤ r and G[v] is connected }.   (9)
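The set M of equation (9) can be enumerated by growing connected variable sets one adjacent vertex at a time (a sketch; names are ours). Every connected induced subgraph of size j + 1 contains a connected one of size j, so this growth procedure finds them all:

```python
def connected_moves(n, edges, r):
    """All subsets of at most r variables inducing a connected subgraph
    of the co-occurrence graph (singletons count as connected)."""
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    moves = {frozenset([i]) for i in range(n)}
    frontier = set(moves)
    while frontier:
        grown = set()
        for s in frontier:
            if len(s) == r:
                continue
            for u in s:
                for w in adj[u] - s:      # extend by an adjacent variable
                    t = s | {w}
                    if t not in moves:
                        moves.add(t)
                        grown.add(t)
        frontier = grown
    return moves

# Path graph x0 - x1 - x2: {x0, x2} is NOT a stored move (disconnected).
M = connected_moves(3, [(0, 1), (1, 2)], 2)
```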
3.3 Memory and Time Complexity of Scores Update

We will now address the question of how many of these Scores exist and what is the cost in time of updating them after a move.

Lemma 1

Let f be a vector Mk Landscape where each Boolean variable appears in at most b subfunctions f_i^{(l)}. Then, the number of connected subgraphs with size no greater than r of the co-occurrence graph G containing a given variable is O((3bk)^r).


For each connected subgraph of G containing the variable x_i we can find a spanning tree with x_i at the root. The degree of any node in G is bounded by bk, since each variable appears in at most b subfunctions and each subfunction depends at most on k variables. Given a rooted tree of j nodes with x_i at the root, we have to assign variables to the rest of the nodes in such a way that two connected nodes have variables that are adjacent in G. The number of ways in which we can do this is bounded by (bk)^{j−1}. We have to repeat the same operation for all the possible rooted trees of size no greater than r. If t_j is the number of rooted trees with j vertices, then the number of connected subgraphs of G containing x_i and with size no greater than r nodes is bounded by

Σ_{j=1}^{r} t_j (bk)^{j−1} = O((3bk)^r),

where we used the result in [10] for the asymptotic behaviour of t_j:

t_j ∼ C α^j j^{−3/2}  as j → ∞, with α ≈ 2.9558 < 3. ∎
Lemma 1 provides a bound for the number of moves in M that contain an arbitrary variable x_i. In effect, the connected subgraphs in G containing x_i correspond to the moves in M that flip variable x_i. An important consequence is given by the following theorem.

Theorem 3.1

Let f be a vector Mk Landscape where each Boolean variable appears in at most b subfunctions. Then, the number of connected subgraphs of G of size no greater than r is O(n (3bk)^r), which is linear in n if b, k and r are independent of n. This is the cardinality of the set M given in (9).


The set of connected subgraphs of G with size no greater than r is the union, over all n variables, of the connected subgraphs of size no greater than r that contain each of the variables. According to Lemma 1 the cardinality of this set must be O(n (3bk)^r). ∎

The next Theorem bounds the time required to update the scores.

Theorem 3.2

Let f be a vector Mk Landscape where each Boolean variable appears in at most b subfunctions f_i^{(l)}. The time required to update the Scores using Algorithm 1 after a move v is O(|v| b k (3bk)^r T_f), where T_f is a bound on the time required to evaluate any subfunction f_i^{(l)}.


Since each variable appears in at most b subfunctions, the number of subfunctions containing at least one of the bits in v is at most |v| b, and this is the number of times that the body of the outer loop starting in Line 2 of Algorithm 1 is executed. Once the outer loop has fixed a pair (i, l), the number of moves t ∈ M with m_{i,l} ∧ t ≠ 0 is the number of stored moves that contain a variable in m_{i,l}. Since |m_{i,l}| ≤ k and using Lemma 1, this number of moves is O(k (3bk)^r). Line 5 of the algorithm is, thus, executed O(|v| b k (3bk)^r) times, and considering the bound T_f on the time to evaluate the subfunctions, the result follows. ∎

Since |v| ≤ r, the time required to update the Scores is O(r b k (3bk)^r T_f), which is independent of n if b does not depend on n. Observe that if b is O(1), then the number of subfunctions of the vector Mk Landscape is O(n), since each of the n variables appears in at most b of them. On the other hand, if every variable appears in at least one subfunction (otherwise the variable could be removed), the number of subfunctions is at least n/k. Thus, a consequence of b = O(1) is that the number of subfunctions is Θ(n).

4 Multi-Objective Hamming-Ball Hill Climber

We have seen that, under the hypothesis of Theorem 3.1, a linear number of Scores can provide information on all the Scores in a Hamming ball of radius r around a solution. However, we would need to sum some of these Scores to get complete information of where all the improving moves are, and this is no more efficient than exploring the Hamming ball. In order to efficiently identify improving moves we have to discard some of them. In particular, we will discard all the improving moves whose Scores are not stored in memory. In [3] the authors proved, for the single-objective case, that if none of the stored Scores is improving, then there cannot exist an improving move in the Hamming ball of radius r around the current solution. Although not all the improving moves can be identified, it is possible to identify local optima in constant time when the hill climber reaches them. This is a desirable property for any hill climber. We will prove in the following that this result can be adapted to the multi-objective case.

If one of the stored Scores indicates a strong improving move, then it is clear that the hill climber is not at a local optimum, and it can take the move to improve the current solution. However, if only weak improving moves can be found in the Scores store, it is not possible to certify that the hill climber has reached a local optimum. The reason is that two weak improving moves taken together could give a strong improving move in the Hamming ball. For example, let us say that we are exploring a Hamming ball of radius r = 2, that variables x₁ and x₂ do not co-occur in a two-dimensional vector function, and that S_{v₁}(x) = (1, −1) and S_{v₂}(x) = (−1, 2), where v₁ and v₂ are the moves flipping only x₁ and only x₂, respectively. Moves v₁ and v₂ are weak improving moves, but by Proposition 1 the move v₁ ∨ v₂ has score (0, 1) and is a strong improving move. We should not miss that strong improving move during our exploration.

To discover all strong improving moves in the Hamming ball we have to consider weak improving moves. But we saw in Section 3 that taking weak improving moves is dangerous because they could make the algorithm cycle. One very simple and effective mechanism to avoid cycling is to classify weak improving moves according to a weighted sum of their score components.

Definition 8 (-improving move and -score)

Let f : B^n → R^d be a vector Mk Landscape, and w ∈ R^d a d-dimensional weight vector. We say that a move v is w-improving for solution x if w · S_v(x) > 0, where · denotes the dot product of vectors. We call w · S_v(x) the w-score of move v for solution x.
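Definitions 7 and 8 give a three-way classification of a stored score vector, which is also how the buckets of Section 4 are organized. A minimal sketch for the maximization case, with w having strictly positive components (names are ours):

```python
def classify(score, w):
    """Return 'strong', 'w-improving' (weak but not strong, with positive
    w-score) or 'other' for a score vector under weight vector w."""
    if any(s > 0 for s in score) and all(s >= 0 for s in score):
        return "strong"
    if sum(s_i * w_i for s_i, w_i in zip(score, w)) > 0:
        return "w-improving"
    return "other"
```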

Proposition 2

Let f be a vector Mk Landscape, and w ∈ R^d a d-dimensional weight vector with w_i > 0 for 1 ≤ i ≤ d. If there exists a strong improving move in a ball of radius r around solution x, then there exists a move v ∈ M with |v| ≤ r such that w · S_v(x) > 0.


Let us say that u is a strong improving move in the Hamming ball of radius r. Then there exist moves v₁, …, v_p ∈ M such that S_u(x) = Σ_{j=1}^{p} S_{v_j}(x). Since u is strong improving and all w_i > 0, we have w · S_u(x) > 0. There must then be a v_j with 1 ≤ j ≤ p such that w · S_{v_j}(x) > 0. ∎

Proposition 2 ensures that we will not miss any strong improving move in the Hamming ball if we take the weak improving moves with an improving w-score. Thus, our proposed hill climber, shown in Algorithm 2, selects strong improving moves in the first place (Line 6) and w-improving moves when no strong improving moves are available (Line 8). In this last case, we should report the value of solution x, since it could be a non-dominated solution (Line 9). The algorithm stops when no w-improving move is available. In this case, a local optimum has been reached, and we should report this final (locally optimal) solution (Line 14). The algorithm cannot cycle, since only w-improving moves are selected, and this means that an improvement is required in the direction of w. A cycle would require taking a w-disimproving move at some step of the climb.

1:Input: Scores S_v for v ∈ M, weight vector w, initial solution x
2:Output: local optimum in x (and potentially non-dominated intermediate solutions)
3:S ← computeScores(x);
4:while w · S_v > 0 for some v ∈ M do
5:   if there is a strong improving move then
6:      v ← selectStrongImprovingMove(S);
7:   else
8:      v ← selectWImprovingMove(S, w);
9:      report(x);
10:   end if
11:   updateScores(S, x, v);
12:   x ← x ⊕ v;
13:end while
14:report(x);
Algorithm 2 Multi-objective Hamming-Ball Hill Climber.

The procedure report in Algorithm 2 should add the reported solution to an external set of non-dominated solutions. This set should be managed by the high-level algorithm invoking the Hamming Ball Hill Climber.

For an efficient implementation of Algorithm 2, the Scores stored in memory can be classified into three categories, each one stored in a different bucket: strong improving moves, w-improving moves that are not strong improving moves, and the rest. The Scores are moved between buckets as they are updated. Moving a Score from one bucket to another requires constant time, and thus the expected time per move in Algorithm 2 is O(r b k (3bk)^r T_f), excluding the time required by report. This implementation corresponds to a next improvement hill climber. An approximate form of best improvement hill climber could also be implemented following the guidelines in [14].
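One possible shape for the three-bucket store described above, with constant-time moves between buckets (a sketch of the data structure, not the paper's implementation; names are ours):

```python
import random

class MoveBuckets:
    """Keeps each stored move in exactly one of three buckets and lets the
    climber pick a strong improving move first, then a w-improving one."""
    def __init__(self):
        self.buckets = {"strong": set(), "w-improving": set(), "other": set()}
        self.where = {}

    def put(self, move, category):
        """(Re-)classify a move; discarding from the old bucket is O(1)."""
        old = self.where.get(move)
        if old is not None:
            self.buckets[old].discard(move)
        self.buckets[category].add(move)
        self.where[move] = category

    def select(self):
        """Return (move, category), or None at a w-local optimum."""
        for cat in ("strong", "w-improving"):
            if self.buckets[cat]:
                return random.choice(sorted(self.buckets[cat])), cat
        return None
```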

The weight vector w in the hill climber determines a direction to explore in the objective space. The use of w to select the weak improving moves is equivalent to considering improving moves of the single-objective function w · f. However, there are two main reasons why it is more convenient to update and deal with the vector Scores rather than with scalar scores of w · f. First, using vector Scores we can identify the strong improving moves stored in memory, while using scalar scores of w · f it is not possible to distinguish between weak and strong improving moves. Second, it is possible to change w during the search without re-computing all the Scores: the only operation needed after a change of w is a re-classification of the moves that are not strong improving³.

³Distinguishing the weak, but not strong, improving moves from the strongly disimproving moves in the implementation would reduce the runtime here, since only the weak improving moves need to be re-classified.

Regarding the selection of improving moves in selectStrongImprovingMove and selectWImprovingMove, our implementation always selects a random one among those with the lowest Hamming distance to the current solution, that is, the moves with the lowest value of |v|. As stated by Theorem 3.2, such moves are, in principle, faster to apply than more distant moves, since the time required for updating the Scores is proportional to |v|.

5 Experimental Results

We implemented a simple Multi-Start Hill Climber algorithm to measure the runtime of the proposed Multi-Objective Hamming-Ball Hill Climber of Algorithm 2. The algorithm iterates a loop in which a solution and a weight vector are randomly generated and Algorithm 2 is executed starting from them. The algorithm keeps a set of non-dominated solutions, which is potentially updated whenever Algorithm 2 reports a solution. The loop stops when a given time limit is reached; in the experiments shown here this time limit was 1 minute. The machine used in all the experiments has an Intel Core 2 Quad CPU (Q9400) at 2.7 GHz, 3 GB of memory and Ubuntu 14.04 LTS. Only one core of the processor is used. The algorithm was implemented in Java 1.6.

To test the algorithm we focused on MNK-Landscapes [1]. An MNK-Landscape is a vector Mk Landscape where m_i = n for all 1 ≤ i ≤ d and each subfunction f_i^{(l)} depends on x_l and K other variables (thus, k = K + 1). The subfunctions are randomly generated using real values between 0 and 1. In order to avoid accuracy problems with floating point arithmetic, instead of real numbers we use integer numbers between 0 and q − 1, and the sum of subfunctions is not divided by n. That is, each component is an NKq-Landscape [2]. We also focused on the adjacent model of NKq-Landscapes. In this model the K variables each subfunction f_i^{(l)} depends on, besides x_l, are consecutive: x_{l+1}, …, x_{l+K} (indices taken modulo n). This ensures that the number of subfunctions a given variable appears in is bounded by a constant, in particular b = d(K + 1), and Theorems 3.1 and 3.2 apply.
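The experimental setup above can be sketched as a generator of adjacent NKq components with integer lookup tables (our own illustrative code, with q as the number of integer values per table entry; not the paper's Java implementation):

```python
import random

def adjacent_mnkq(n, K, d, q, seed=0):
    """d-objective adjacent NKq-Landscape: objective i is the sum of n
    subfunctions; subfunction l depends on the consecutive variables
    l, l+1, ..., l+K (mod n) and its lookup table holds ints in [0, q)."""
    rng = random.Random(seed)
    k = K + 1
    vars_of = [[(l + j) % n for j in range(k)] for l in range(n)]
    tables = [[[rng.randrange(q) for _ in range(2 ** k)]
               for _ in range(n)] for _ in range(d)]

    def evaluate(x):               # x is an int used as a bit string
        values = []
        for i in range(d):
            total = 0
            for l in range(n):
                idx = 0
                for j, var in enumerate(vars_of[l]):
                    idx |= ((x >> var) & 1) << j
                total += tables[i][l][idx]
            values.append(total)
        return tuple(values)

    return vars_of, evaluate

masks, f = adjacent_mnkq(n=5, K=1, d=2, q=10)
```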

5.1 Runtime

There are two procedures in the hill climber that require a significant amount of time. The first one is a problem-dependent initialization procedure, where the Scores to be stored in memory are determined. This procedure is run only once per run of the multi-start algorithm. In our experiments this time varied from 284 to 5,377 milliseconds.

The second procedure is a solution-dependent initialization of the hill climber, starting from a random solution and weight vector. This procedure is run once in each iteration of the multi-start hill climber loop, and can have an important impact on the algorithm runtime, especially when few moves are done during the execution of Algorithm 2. On the other hand, as the search progresses and the non-dominated set of solutions grows, the procedure to update it could also require a non-negligible run time that depends on the number of solutions in the non-dominated set, which could be proportional to the number of moves done during the search.

In Figure 2 we show the average time per move in microseconds (μs) for the Multi-Start Hill Climber solving adjacent MNK-Landscapes for several problem sizes n (up to 100,000 variables), two values of the dimension d, and an exploration radius r varying from 1 to 3. We performed 30 independent runs of the algorithm for each configuration, and the results are the average of these 30 runs. To compute the average, we excluded the time required by the problem-dependent initialization procedure.

Figure 2: Average time per move in μs for the Multi-Start Hill Climber based on Algorithm 2 on adjacent MNK-Landscapes, with the exploration radius r ranging from 1 to 3.

We can observe that moves are done very fast (tens to hundreds of microseconds). This is especially surprising if we consider the number of solutions "explored" in a neighborhood. For r = 3 and n = 100,000 the neighborhood contains around 166 trillion solutions, which are explored in around 1 millisecond. For all values of r and d, the increase in the average time per move is very slow (if any) as n grows. This slight growth in the average run time is due to the solution-dependent initialization and the non-dominated set update, and contrasts with the Θ(n^r) time per move that a black-box algorithm would require to explore the same neighborhood.

As we could expect, the value of r has a great influence on the average time per move; in fact, the time grows exponentially with r. Regarding the memory required to store the Scores, we have already seen that it is O(n (3bk)^r). In the particular case of the MNK-Landscapes with an adjacent interaction model, it is not hard to conclude that the exact number of Scores is Θ(n), linear in n.

5.2 Quality of the Solutions

In a second experiment we want to check whether a larger value of r leads to better solutions. This highly depends on the algorithm that includes the hill climber. In our case, since the algorithm is a multi-start hill climber, we would expect an improvement in solution quality as we increase r. But at the same time, the average time per move increases. Thus, there must be a value of r at which the time per move is so large that lower values of the radius can lead to the same solution quality. In Figure 3 we show the 50%-empirical attainment surfaces of the fronts obtained in the 30 independent runs of the multi-start hill climber with r varying from 1 to 3. The 50%-empirical attainment surface (50%-EAS) limits the region in the objective space that is dominated by half the runs of the algorithm. It generalizes the concept of median to the multi-objective case (see [9] for more details).

Figure 3: 50%-empirical attainment surfaces of the 30 independent runs of the Multi-Start Hill Climber based on Algorithm 2 on an adjacent MNK-Landscape, with r varying from 1 to 3.

We can see in Figure 3 that the 50%-EAS obtained for r = 2 completely dominates the one obtained for r = 1, and the 50%-EAS for r = 3 dominates that of r = 2. That is, increasing r we obtain better approximations to the Pareto front, even if the time per move increases. This means that fewer moves are done in the given time limit (1 minute), but they are more effective.

6 Conclusions and Future Work

We proposed in this paper a hill climber based on an efficient mechanism to identify improving moves in a Hamming ball of radius r around a solution of a k-bounded pseudo-Boolean multi-objective optimization problem. With this paper we contribute to an active line of research, sometimes called Gray-Box optimization [6], which suggests using as much information about the problem as possible to design better search methods, in contrast to Black-Box optimization.

Our proposed hill climber performs each move in bounded constant time if each variable of the problem appears in at most a constant number of subfunctions. In practice, the experiments on adjacent MNK-Landscapes show that the average time per move varies from tens to hundreds of microseconds as the exploration radius r grows from 1 to 3. This number is independent of n, despite the fact that the hill climber is considering a Hamming ball of radius r containing up to Θ(n^r) solutions.

Further work is needed to integrate this hill climber in a higher-level algorithm including mechanisms to escape from plateaus and local optima. On the other hand, one important limitation of our hill climber is that it does not take into account constraints in the search space. Constraint management and the combination with other components to build an efficient search algorithm seem two promising and challenging directions for work in the near future.


  • [1] Aguirre, H.E., Tanaka, K.: Insights on properties of multiobjective MNK-landscapes. In: Proceedings of the 2004 Congress on Evolutionary Computation (CEC 2004). vol. 1, pp. 196–203 (June 2004)

  • [2] Chen, W., Whitley, D., Hains, D., Howe, A.: Second order partial derivatives for NK-landscapes. In: Proceedings of GECCO. pp. 503–510. ACM, New York, NY, USA (2013)
  • [3] Chicano, F., Whitley, D., Sutton, A.M.: Efficient identification of improving moves in a ball for pseudo-boolean problems. In: Proceedings of Genetic and Evolutionary Computation Conference. pp. 437–444. ACM, New York, NY, USA (2014)
  • [4] Crama, Y., Hansen, P., Jaumard, B.: The basic algorithm for pseudo-boolean programming revisited. Discrete Applied Mathematics 29(2-3), 171–185 (1990)
  • [5] Dubois-Lacoste, J., López-Ibáñez, M., Stützle, T.: Anytime Pareto local search. European Journal of Operational Research 243(2), 369–385 (2015), http://www.sciencedirect.com/science/article/pii/S0377221714009011
  • [6] Goldman, B.W., Punch, W.F.: Gray-box optimization using the parameter-less population pyramid. In: Proceedings of Genetic and Evolutionary Computation Conference. pp. 855–862. ACM, New York, NY, USA (2015)
  • [7] Heckendorn, R., Rana, S., Whitley, D.: Test function generators as embedded landscapes. In: Foundations of Genetic Algorithms. pp. 183–198. Morgan Kaufmann (1999)

  • [8] Hoos, H.H., Stützle, T.: Stochastic Local Search: Foundations and Applications. Morgan Kaufmann (2004)
  • [9] Knowles, J.: A summary-attainment-surface plotting method for visualizing the performance of stochastic multiobjective optimizers. In: Proceedings of Intelligent Systems Design and Applications. pp. 552–557 (Sept 2005)
  • [10] Otter, R.: The number of trees. Annals of Mathematics 49(3), 583–599 (1948)
  • [11] Paquete, L., Schiavinotto, T., Stützle, T.: On local optima in multiobjective combinatorial optimization problems. Annals of Operations Research 156(1), 83–97 (2007)

  • [12] Rosenberg, I.G.: Reduction of bivalent maximization to the quadratic case. Cahiers Centre Etudes Rech. Oper. 17, 71–74 (1975)
  • [13] Taillard, E.: Robust taboo search for the quadratic assignment problem. Parallel Comput. 17(4-5), 443–455 (Jul 1991)
  • [14] Whitley, D., Howe, A., Hains, D.: Greedy or not? best improving versus first improving stochastic local search for MAXSAT. In: Proc.of AAAI-2013 (2013)
  • [15] Whitley, D.: Mk landscapes, NK landscapes, MAX-kSAT: A proof that the only challenging problems are deceptive. In: Proceedings of Genetic and Evolutionary Computation Conference. pp. 927–934. ACM, New York, NY, USA (2015)
  • [16] Whitley, D., Chen, W.: Constant time steepest descent local search with lookahead for NK-landscapes and MAX-kSAT. In: Soule, T., Moore, J.H. (eds.) GECCO. pp. 1357–1364. ACM (2012)