The Road to VEGAS: Guiding the Search over Neutral Networks

07/19/2012, by Marie-Eleonore Marmion et al. (Inria)

VEGAS (Varying Evolvability-Guided Adaptive Search) is a new methodology proposed to deal with the neutrality property of some optimization problems. Its main feature is to consider the whole neutral network rather than an arbitrary solution. Moreover, VEGAS is designed to escape from plateaus based on the evolvability of solutions and a multi-armed bandit. Experiments are conducted on NK-landscapes with neutrality. Results show the importance of considering the whole neutral network and of guiding the search cleverly. The impact of the level of neutrality and of the exploration-exploitation trade-off is analyzed in depth.




1 Motivations

Due to their ability to find satisfying solutions efficiently and effectively, the design of local search approaches remains a prominent issue in hard combinatorial optimization. This class of methods is also of strong interest since such methods are generally quite simple to implement. However, in some circumstances, they may run into difficulties. One of the most critical situations arises when the problem under study exhibits neutrality, which is the case for many combinatorial optimization problems, such as scheduling or satisfiability. This property means that many different solutions share the same fitness value. In such a case, natural questions arise: once the neutrality of a problem is known, how can the search exploit it? How can the search be guided to exploit this neutrality successfully? The aim of this article is to propose an efficient approach, based on simple local search principles and adaptive mechanisms, that exploits the neutrality inherent to a problem without overly inhibiting the search on neutrality-free problems. To this end, an experimental comparison is conducted between approaches that do and do not exploit neutrality explicitly during the search process.

The central idea of the article is based on two observations. On the one hand, up to now, the neutrality property has been under-exploited in the design of search algorithms, with the exception of a few examples that will be detailed later in the paper. On the other hand, adaptive search based on a multi-armed bandit framework has been successfully applied to parameter control in the context of evolutionary algorithms, in particular in adaptive operator selection [7]. In this paper, the goal is not to adapt parameters along the search. Instead, we propose an original adaptive algorithm, based on a multi-armed bandit framework, to guide the search over neutral networks. When the search is stuck on a plateau, VEGAS adaptively selects the most promising solution whose neighborhood has to be explored. Hence, two main questions are addressed in this paper:

  • How should neutrality, and hence neutral networks, be considered during the search process? How can an arbitrary choice be avoided during a neutral search?

  • How to guide the search adaptively over neutral networks in order to explore the neighborhood of a solution that is more likely to escape a plateau by the top?

The paper is organized as follows. Section 2 introduces fundamental definitions together with previous work on neutrality-based and adaptive search methods. The proposed VEGAS algorithm, where the search is adaptively guided by the evolvability of solutions, is introduced in Section 3. Section 4 presents experimental results on the performance of VEGAS, on the influence of its single problem-independent parameter, and on the influence of the degree of neutrality of the problem at hand. The last section gives a summary of results and discusses future work.

2 Background

This section introduces the main concepts dealing with landscape analysis and the design of neutrality-based and adaptive search methods.

2.1 Local Search, Neutrality and Evolvability

A fitness landscape [18] is a triplet (S, N, f) where S is a set of admissible solutions (i.e. the search space), N is a neighborhood structure, and f : S → ℝ is a fitness function that can be pictured as the height of the corresponding solutions, here assumed to be maximized. A neighborhood structure is a mapping function that assigns a set of solutions N(s) ⊂ S to any feasible solution s ∈ S. N(s) is called the neighborhood of s, and a solution s' ∈ N(s) is called a neighbor of s. We may then define a local optimum as follows: a solution s* is a local optimum iff no neighbor has a better fitness value: ∀ s ∈ N(s*), f(s) ≤ f(s*).
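These definitions can be made concrete with a minimal sketch, assuming (purely for illustration, not prescribed by the paper at this point) a binary search space with the one-bit-flip neighborhood and a toy ONE-MAX fitness:

```python
from typing import Callable, List, Tuple

def neighbors(s: Tuple[int, ...]) -> List[Tuple[int, ...]]:
    """One-bit-flip neighborhood N(s) over binary strings."""
    return [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(len(s))]

def is_local_optimum(s: Tuple[int, ...], f: Callable) -> bool:
    """s is a local optimum iff no neighbor strictly improves f (maximization)."""
    return all(f(n) <= f(s) for n in neighbors(s))

# Toy fitness: number of ones (ONE-MAX); (1,1,1) is the global, hence local, optimum.
f = lambda s: sum(s)
print(is_local_optimum((1, 1, 1), f))  # True
print(is_local_optimum((0, 1, 1), f))  # False: flipping bit 0 improves f
```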

The importance of selective neutrality as a significant factor in evolution was stressed by Kimura [12] in the context of evolutionary theory. The relevance and benefits of neutrality for the robustness and evolvability of living systems have recently been discussed by Wagner [23]. There is growing evidence that such large-scale neutrality is also present in artificial landscapes: not only in combinatorial fitness landscapes such as randomly generated SAT instances [8], cellular automata rules [22] and many others, but also in complex real-world design and engineering applications such as evolutionary robotics [17, 16], evolvable hardware [10, 21], genetic programming [2, 24] and scheduling [14].

A neutral neighbor of a given solution s is a neighboring solution s' ∈ N(s) with the same fitness value: f(s') = f(s). Given a solution s, the set of neutral solutions in its neighborhood is defined by N_n(s) = {s' ∈ N(s) | f(s') = f(s)}. The neutral degree of a solution is the number of its neutral neighbors, |N_n(s)|. A fitness landscape is said to be neutral if there are many solutions with a high neutral degree.

A neutral network, or plateau, denoted here by NN, is a connected sub-graph, with respect to the neighborhood relation, whose vertices are solutions with the same fitness value. There exists an edge between solutions s1 and s2 when s2 is a neutral neighbor of s1. A portal in a NN is a solution that has at least one neighbor with a better fitness value, i.e. a greater fitness value in a maximization context.
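The notions of neutral neighbor, neutral degree and portal can be sketched in the same toy setting, assuming (for illustration only) a bit-string space with one-bit-flip neighborhood and a hypothetical fitness that ignores its last bit, thereby creating neutrality:

```python
def neighbors(s):
    """One-bit-flip neighborhood over binary strings."""
    return [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(len(s))]

def neutral_neighbors(s, f):
    """N_n(s) = {s' in N(s) | f(s') = f(s)}; its size is the neutral degree."""
    return [n for n in neighbors(s) if f(n) == f(s)]

def is_portal(s, f):
    """A portal has at least one strictly better neighbor (maximization)."""
    return any(f(n) > f(s) for n in neighbors(s))

# Toy fitness with neutrality: the last bit is ignored, so flipping it is a neutral move.
f = lambda s: sum(s[:-1])
s = (0, 1, 0)
print(len(neutral_neighbors(s, f)))  # 1 (flipping the last bit)
print(is_portal(s, f))               # True (flipping bit 0 improves f)
```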

2.2 Neutrality-based Search

When dealing with neutrality, two extreme local search approaches are usually designed. The first one simply ignores neutrality. One of the simplest such algorithms is first-improvement hill-climbing (FIHC), where the first evaluated neighbor that strictly improves the current fitness value is accepted. In other words, the heuristic does not move to a neutral neighboring solution, and prefers to keep exploring the neighborhood of the current solution, assuming that the neutral neighbor will not lead to better solutions. At the opposite extreme, some local search approaches always accept the first visited neighbor with the same fitness value. A typical example is the Netcrawler process [4], which consists of a random neutral walk with a mutation mode adapted to local neutrality. The per-sequence mutation rate is optimized to jump from one NN to another. Stewart [19] also proposed an Extrema Selection for evolutionary optimization in order to move towards promising solutions in a neutral search space. The selection aims at accelerating the evolution once most solutions of the population have reached the same level of performance. During the selection step, each solution is assigned an endogenous performance so as to explore the search space area with the same performance more largely, with the assumption that this helps reaching solutions with better fitness values. Additionally, the NILS (Neutrality-based Iterated Local Search) algorithm, recently proposed in [13], shows interesting results and confirms the interest of taking neutrality into account for flowshop scheduling problems.
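The two extreme strategies can be sketched as follows; these are hypothetical minimal versions (bit-flip neighborhood, the evaluation budget as a plain counter), not the exact algorithms of [4]:

```python
import random

def neighbors(s):
    """One-bit-flip neighborhood over binary strings."""
    return [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(len(s))]

def fihc(s, f, max_evals=10_000):
    """First-improvement hill-climbing: accept only strictly improving neighbors."""
    evals, improved = 0, True
    while improved and evals < max_evals:
        improved = False
        for n in random.sample(neighbors(s), len(neighbors(s))):
            evals += 1
            if f(n) > f(s):        # neutral neighbors are NOT accepted
                s, improved = n, True
                break
    return s

def netcrawler(s, f, max_evals=10_000):
    """Netcrawler-style walk: also accept the first neutral neighbor found."""
    evals = 0
    while evals < max_evals:
        n = random.choice(neighbors(s))
        evals += 1
        if f(n) >= f(s):           # improving OR neutral move
            s = n
    return s

f = lambda s: sum(s)
print(fihc((0,) * 8, f))  # (1, 1, 1, 1, 1, 1, 1, 1)
```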

All those heuristics focus the search on the last solution found on the NN. Their bet is that the newly accepted solution has a better chance to lead to improving solutions than the previous one, because no better solution has (yet) been evaluated in its neighborhood. But when no better neighbor is found, the heuristic prefers to move to a new solution (in another part of the search space) rather than to go back to a previously accepted solution, even if the latter could seem more promising a posteriori. Hence, nothing motivates the choice of exploring the neighborhood of the last-accepted solution of the NN rather than any other one. Most of the time, such a choice appears quite arbitrary. We believe that a better trade-off might exist between these two extreme cases (ignoring neutrality, and always restarting from the last-accepted neutral solution), by keeping a memory of the solutions evaluated along the NN.

2.3 Adaptive Search

Autonomous, self-managed search has received more and more attention over the past years, due to the increasing complexity of search methods and problems. The general goal of such methods is to automatically adapt their mechanisms to changing problem conditions. The aim of parameter control is the on-line setting of parameters such as the representation of solutions, the stochastic operators (mutation, crossover), the selection operators, the application rates of those operators, etc. [6]. In combinatorial optimization, adaptive methods are often preferred over self-adaptive ones, which increase the size of the search space. Adaptive methods select the new parameter setting from the search history. Different selection rules are used: probability matching, adaptive pursuit [20], which attaches a probability of success to each operator, the multi-armed bandit [5], and so on. The multi-armed bandit framework is a sequential learning model, mostly studied in game theory, dealing with the trade-off between exploration and exploitation. It considers a set of independent arms, each one providing a reward following an unknown distribution. An optimal selection strategy maximizes the cumulative reward over time. The Upper Confidence Bound (UCB) strategy [1], which is asymptotically optimal, has been used in the context of adaptive operator selection [5]. To each arm, which is an operator in that context, is associated an empirical reward which reflects its quality. Then, the operator (arm) with the best score is selected, where the upper confidence bound of the reward defines this score:

    score_t(i) = q̂_{i,t} + C · sqrt( 2 ln(Σ_j n_{j,t}) / n_{i,t} )        (1)

where, for time step t, q̂_{i,t} is the empirical reward of operator i, and n_{i,t} is the number of times operator i has been tried. C is a problem-independent parameter representing a scaling factor which controls the trade-off between exploitation and exploration.
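The UCB score of equation (1) reduces to a one-liner; the snippet below is a small sketch (the arm counts and rewards are made-up numbers for illustration):

```python
import math

def ucb_score(reward, n_i, n_total, C):
    """UCB score: empirical reward plus a C-scaled exploration bonus."""
    return reward + C * math.sqrt(2 * math.log(n_total) / n_i)

# Three arms: arm 1 has the best empirical reward but was tried most often.
rewards, counts = [0.5, 0.7, 0.4], [10, 50, 2]
total = sum(counts)
scores = [ucb_score(r, n, total, C=0.5) for r, n in zip(rewards, counts)]
best = scores.index(max(scores))
print(best)  # 2: the rarely tried arm wins through its exploration bonus
```

With a smaller C the exploitation term dominates and arm 1, the best empirical performer, would be selected instead.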

Figure 1: Example of the AUC reward computed for an operator. The list of fitness values produced by all operators is sorted in decreasing order, and a curve is drawn from the origin according to this list: when a value was produced by the operator under consideration, the curve takes a step along one axis, otherwise it steps along the other axis. The area under this curve is the reward of the operator [7].

The measure of operator quality (reward) has an impact on the efficiency of the method. Different reward measures have been proposed: the average fitness improvement between parent and offspring, the maximum fitness improvement over a time window [5], or, more recently, the Area Under the Curve (AUC) [7]. This credit assignment compares the fitness values of the solutions produced by each operator, using ranks instead of a normalized fitness improvement. The bandit technique thereby becomes more invariant to fitness function transformations, and its sensitivity to the parameter C decreases. Figure 1 gives an example of the AUC computation. Details are omitted due to space limitations; the reader is referred to [7].
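The rank-based AUC credit assignment described above can be sketched as follows; `history` and `auc_reward` are hypothetical names, and the computation amounts to the area under the step curve of Figure 1, i.e. counting, for the chosen operator, how many of its values outrank values produced by the others:

```python
def auc_reward(history, op):
    """Rank-based AUC credit assignment (sketch).

    history: list of (operator, fitness) pairs; higher fitness ranks first.
    Walking the ranking, the curve steps up for `op` and right otherwise;
    the area under the curve is `op`'s reward."""
    ranked = sorted(history, key=lambda t: t[1], reverse=True)
    y = area = 0
    for o, _ in ranked:
        if o == op:
            y += 1          # vertical step
        else:
            area += y       # horizontal step: add the current curve height
    return area

# Operator 'a' produced the two best fitness values: it gets the larger area.
hist = [('a', 0.9), ('b', 0.5), ('a', 0.8), ('b', 0.2)]
print(auc_reward(hist, 'a'), auc_reward(hist, 'b'))  # 4 0
```

Note that the reward depends only on ranks: rescaling all fitness values monotonically leaves it unchanged, which is the invariance property mentioned above.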

3 Vegas

This section presents the VEGAS (Varying Evolvability-Guided Adaptive Search) algorithm. As seen in the previous section, the First-Improvement Hill-Climbing (FIHC) algorithm and the Netcrawler (NC) algorithm are based on different strategies. The first one does not take neutrality into account, whereas the second one proposes a specific way to deal with it (by always exploring the last accepted - better or neutral - solution). The aim of VEGAS is to take the neutrality explicitly into account and to propose an efficient way to guide the search on NN. First, we reconsider the search over a NN. Second, a guiding strategy, based on the evolvability of solutions, is proposed.

3.1 Considering Neutral Networks

First, let us define some useful terms. A solution is said to be evaluated if its fitness value has been computed. A solution is marked as visited if its neighborhood has been completely evaluated; otherwise it is non-visited. The neighborhood of a solution is explored in a random order, without repetition. Recall that a NN, also called a plateau, is a set of neighboring solutions with the same fitness value. The set of evaluated solutions from the current NN is denoted by S_NN.

Let us consider a simple local search algorithm that iteratively improves a current solution by exploring its neighborhood. As soon as a strictly improving neighboring solution is found, it is accepted and replaces the current solution. Whenever a neutral neighboring solution is evaluated, a particular strategy is applied in order to iteratively build the set S_NN. As opposed to NC, the main idea of our approach is to consider the whole set of evaluated solutions from the current NN (i.e. S_NN) instead of a single one (the last-evaluated solution). Now, the question is: which solution should be selected in order to evaluate a new neighboring solution? When no particular information is available, this solution can be selected at random. The algorithm stops once all solutions from S_NN are marked as visited, i.e. the neighborhood of every solution in S_NN has been explored. When there is no neutrality, this algorithm behaves like FIHC.

3.2 Guiding the Search over Neutral Networks

  while ∃ s ∈ S_NN such that s is not visited do
     s ← select(S_NN)
     Choose a neighbor s' ∈ N(s) at random (no repetition)
     if f(s') > f(s) then
        S_NN ← {s'}
     else if f(s') = f(s) then
        S_NN ← S_NN ∪ {s'}
     end if
     Update rewards(S_NN)
  end while
Algorithm 1 VEGAS

Instead of randomly choosing the next solution to explore, the selection can be guided. Indeed, on a NN, several evaluated solutions are available, and only one has to be chosen in order to evaluate one of its neighbors. To make this choice, we propose to use the evolvability of solutions.

Algorithm 1 presents the general framework of VEGAS. All the evaluated solutions of the current NN are recorded in S_NN. Then, a select method returns a solution s ∈ S_NN. A new neighbor s' ∈ N(s) is evaluated (without repetition). If s' has a better fitness value, the set S_NN becomes the singleton {s'}. Otherwise, if s' has the same fitness value as the current one, it is added to S_NN. A reward is computed for each solution from S_NN, and the select method is applied again.

The select method is one of the main components of VEGAS. For instance, if select(S_NN) always returns the last neutral solution evaluated, the algorithm behaves like NC. Here, we consider that the solution with the highest score, as given by equation (1), is selected. Thus, every time a new solution is evaluated in the neighborhood of a solution s ∈ S_NN, its fitness value is recorded to update the score values of all solutions in S_NN according to the credit assignment under consideration. The reward is based on the AUC (see Section 2.3): the arms are the solutions from S_NN, and the fitness values of the evaluated neighbors are used to compare them. The AUC thus gives a way to compare the evolvability of the evaluated solutions on a NN. For instance, when the fitness values of the neighbors of a solution s1 are better than those of a solution s2, the AUC of s1 is higher than that of s2.

The parameter C in (1) allows tuning the trade-off between exploration and exploitation. Here, it affects the exploration and the exploitation of the neighborhoods of solutions from the NN. When C is large, more weight is given to exploration: the search promotes the sampling of neighborhoods with few evaluated solutions. When C is small, more weight is given to exploitation: the search promotes the sampling of neighborhoods where the best neighbors have been evaluated so far. This is based on the assumption that solutions with a higher evolvability are more likely to lead to a portal.
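The plateau loop of Algorithm 1 can be sketched in Python. This is a hypothetical minimal implementation: it assumes a bit-flip neighborhood, and, for brevity, the empirical reward of a solution is simplified to the best fitness seen among its sampled neighbors rather than the AUC used in the paper:

```python
import math, random

def neighbors(s):
    """One-bit-flip neighborhood over binary strings."""
    return [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(len(s))]

def vegas_on_plateau(start, f, C=0.5, max_evals=5_000):
    """Sketch of the VEGAS plateau loop: UCB-select a plateau solution,
    evaluate one of its unseen neighbors, return a portal neighbor or None."""
    plateau_fitness = f(start)
    S = {start: []}                               # plateau solution -> sampled neighbor fitnesses
    remaining = {start: list(neighbors(start))}   # neighbors not yet evaluated
    evals = 0
    while any(remaining.values()) and evals < max_evals:
        total = sum(len(v) for v in S.values())
        def score(s):
            n = len(S[s])
            if n == 0:
                return float('inf')               # unexplored solutions first
            return max(S[s]) + C * math.sqrt(2 * math.log(total) / n)
        s = max((x for x in S if remaining[x]), key=score)
        nb = remaining[s].pop(random.randrange(len(remaining[s])))
        evals += 1
        fn = f(nb)
        S[s].append(fn)
        if fn > plateau_fitness:
            return nb                             # portal found: leave the plateau
        if fn == plateau_fitness and nb not in S:
            S[nb] = []                            # grow the evaluated part of the NN
            remaining[nb] = list(neighbors(nb))
    return None                                   # local-optimum plateau

# Toy: the last two bits are neutral; improving the first two bits is a portal.
f = lambda s: sum(s[:2])
out = vegas_on_plateau((0, 1, 0, 0), f)
print(out is not None and f(out) == 2)  # True
```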

4 Experiments

4.1 NK-Landscapes with Neutrality

The family of NK-landscapes is a problem-independent model used for constructing multimodal landscapes [11]. Such a model is of high interest for the design of new search approaches. Parameter N refers to the number of bits in the search space (i.e. the binary string length), and K to the number of bits that influence a particular bit of the string (the epistatic interactions). By increasing the value of K from 0 to N−1, NK-landscapes can be gradually tuned from smooth to rugged. The fitness function (to be maximized) of an NK-landscape is defined on binary strings of size N. An 'atom' with a fixed epistasis level is represented by a fitness component c_i, associated with each bit i. Its value depends on the allele at bit i and also on the alleles at K other epistatic positions (K must be defined between 0 and N−1). The fitness of a solution x corresponds to the mean value of its fitness components: f(x) = (1/N) Σ_i c_i. In the original NK-landscapes, the fitness components are uniformly distributed in the range [0, 1), so that it is very unlikely that the same fitness value is assigned to two different solutions. In other words, the neutrality is null.

In our study, we use an extension of this initial model in which neutrality has been added. The way neutrality is artificially included has an important impact on the structure of the resulting landscapes. Several models have been proposed to generalize the initial NK-landscapes by adding a tunable level of neutrality. Among others, there are the NKp-landscapes (p for probabilistic) [3] and the NKq-landscapes (q for quantized) [15]. NKp-landscapes are very similar to NK-landscapes, except that the fitness contributions are null with rate p. In the NKq-landscapes, which will be used in this paper, the fitness contributions are integer values belonging to the range [0, q). The total fitness is scaled by a factor 1/(q−1) in order to translate it into the range [0, 1]. As indicated by Geard et al. [9] in their comparison of neutral landscapes, NKq-landscapes are similar to NKp-landscapes in several aspects. The NKq-landscapes look like NK-landscapes in which rugged hillsides have been flattened into plateaus. The smaller q, the higher the level of neutrality.
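A small generator sketch for this model, assuming (as one plausible reading, with lazily built contribution tables and randomly chosen epistatic links) the NKq definition above:

```python
import random

def make_nkq(N, K, q, seed=0):
    """Random NKq-landscape (sketch): each bit's contribution is an integer
    in {0, ..., q-1} indexed by the bit and its K epistatic neighbors;
    fitness is the mean contribution, scaled into [0, 1]."""
    rng = random.Random(seed)
    # Epistatic links: K random other positions per bit.
    links = [rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
    tables = [{} for _ in range(N)]

    def fitness(x):
        total = 0
        for i in range(N):
            key = (x[i],) + tuple(x[j] for j in links[i])
            if key not in tables[i]:
                tables[i][key] = rng.randrange(q)  # lazily drawn contribution
            total += tables[i][key]
        return total / (N * (q - 1))               # scale into [0, 1]
    return fitness

f = make_nkq(N=16, K=2, q=2)
x = tuple(random.Random(1).randrange(2) for _ in range(16))
fx = f(x)
print(0.0 <= fx <= 1.0, f(x) == fx)  # True True (deterministic per solution)
```

With q = 2 each contribution is 0 or 1, so many solutions collide on the same fitness value, which is exactly the high-neutrality regime discussed above.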

4.2 Experimental Design

In order to study the performance of the proposed VEGAS algorithm, we compare it with three other approaches:

  • FIHC: a First-Improvement Hill-Climbing algorithm, that strictly improves the current fitness value during the search;

  • NC: a Netcrawler algorithm [4], that allows neutral moves to be performed during the search;

  • F2NS: a Fair Neutral Network Search algorithm, that evaluates a random neighbor (without replacement) of a random solution from the NN.

We experiment with these algorithms on randomly generated NKq-landscapes with q ∈ {2, 3, 4} and K ∈ {2, 4, 6, 8}. They give us the opportunity to compare the algorithms on different configurations with respect to neutrality and non-linearity. The neighborhood is defined by the one-bit-flip operator. For every instance, 100 independent executions are performed.

All the algorithms start from a random solution. The stopping condition is given in terms of a maximal number of evaluations. The four algorithms may converge before this stopping condition is reached; they have therefore all been embedded in a random-restart framework: when an algorithm seems to have converged, the search restarts from a new random solution (keeping the best solution found so far). For FIHC, the search stops when the current solution is a local optimum (no neighbor has a strictly higher fitness). NC, which accepts neutral moves, only stops on strict local optima, where the current solution cannot be strictly improved; for NC we therefore fix a second maximal number of evaluations, corresponding to a maximal number of moves on the same NN. This number has been set, for each combination of K and q, from preliminary experiments: 100 independent runs were performed per instance, the number of evaluations needed to converge was recorded in each run, and the maximum over the runs gives the value (reported in Table 1). For F2NS and VEGAS, we consider that the search has converged when S_NN covers the whole NN and no portal is available on this NN (local-optimum plateau). The VEGAS algorithm has a single problem-independent parameter, C, which controls the trade-off between the exploration and the exploitation of the neighborhoods of solutions from the NN. Following [7], multiple C-values are investigated. Let VEGAS_C denote a VEGAS instance with parameter value C.

        K=2      K=4      K=6     K=8
q=2     23,772   27,950   7,733   6,143
q=3     1,891    1,648    1,987   1,921
q=4     8,198    2,000    3,593   1,189
Table 1: Maximum number of moves on the same NN for NC.
Figure 2: Average normalized fitness found, according to parameter K, after the maximal number of evaluations, for the four configurations (panels (a)–(d)). Results for VEGAS, F2NS, FIHC and NC.

4.3 Experimental Results and Discussion

The following experiments first deal with the overall performance of the VEGAS algorithm against F2NS, FIHC and NC. Next, the influence of the parameter C is analyzed in depth, and first conclusions on the link between C-values and the dynamics of the algorithm are given.

4.3.1 Performance Analysis

In this section, the four algorithms under study are compared with each other. As far as VEGAS is concerned, only one parameter (C) is to be set; its influence is studied in the next section. Here, we use a value of C that leads to overall good performance.

With respect to the non-linearity (K) and to the level of neutrality (q), the fitness values of the solutions found are not directly comparable. Hence, in order to compare the performance of the four algorithms across parameters K and q, the fitness values have been normalized using the average and the standard deviation of all fitness values found for the same problem instance. Such an approach brings the average performance around zero and magnifies the extreme behaviors. The average value f̄ and the standard deviation σ of all fitness values are computed over the runs performed by each algorithm. For a fitness value f, its normalized value is z = (f − f̄) / σ. The performance of a given algorithm on a particular NKq-landscape is given in terms of this average z-value over the executions.
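This normalization is a standard z-score; a minimal sketch (with made-up fitness values for illustration):

```python
def normalize(values):
    """z-normalization used to compare fitness values across instances."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values]

z = normalize([0.60, 0.62, 0.64, 0.70])
print(abs(sum(z)) < 1e-9)  # True: normalized values are centered around zero
print(z[3] > 0)            # True: the best raw fitness stays the best z-value
```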

The respective performance of all the algorithms is given in Figure 2. For each q-value, the performance is plotted with respect to K. For the larger q-values, VEGAS and F2NS clearly outperform FIHC and NC. Comparing VEGAS and F2NS is more difficult: even if VEGAS always outperforms F2NS, both are very close and the difference may not be statistically significant (see below). For the smallest q-value, Figure 2 (a) does not show a clear result: NC gives the best performance for small K, but becomes almost the worst approach for larger K, where the best approach seems to be FIHC.

In order to assess these results, a Wilcoxon two-sample paired signed rank test is used, with the null hypothesis that the median of the paired performance differences of the two algorithms under comparison is null. Table 2 gives the output of the Wilcoxon test.

Table 2: Wilcoxon paired tests on the 100 runs between the 4 algorithms, with columns grouped by q = 2, 3, 4 (table data not reproduced here): '=' means that both algorithms are not significantly different, '>' that the algorithm of the row outperforms the one of the column, and '<' the contrary.
Figure 3: Average fitness found by VEGAS as a function of C, for three instance configurations (panels (a)–(c)). The horizontal dotted line is the performance of F2NS.

These results confirm that:

  • Except for the smallest q-value, VEGAS and F2NS always significantly outperform FIHC and NC. This shows that (i) it is worth exploring the NN (in contrast to FIHC), and that (ii) the way to do it has a large influence: even a method that selects a solution at random on the NN (like F2NS) may outperform a method that always selects the last-evaluated solution (like NC).

  • VEGAS is never outperformed by F2NS. This shows that guiding the search over a NN allows better solutions to be obtained.

In summary, exploring NN is a good way to guide the search over neutral landscapes, at least for a reasonable level of neutrality. However, this must be done carefully: it is better to randomly choose the next solution to explore than to always make the same arbitrary choice. An alternative that shows interesting results is to pursue the search from the solution with the best evolvability.

4.3.2 Impact of Parameter C

In this section, we analyze the performance of VEGAS according to the setting of its (single) problem-independent parameter C. Comparing the results obtained over all the instances, the efficiency of VEGAS does not seem to be clearly sensitive to the parameter C. This is confirmed by the Wilcoxon paired test, which indicates that no general trend can be found. However, for some instances, VEGAS with exploration (large C) outperforms VEGAS with exploitation (small C) on average.

This is illustrated in Figure 3, which gives the average fitness values obtained according to the parameter C; similar figures were obtained for the other instances. Indeed, the exploration of NN (large C) gives better results than their exploitation (small C). Figure 3 also shows a dotted line representing the average fitness value obtained with F2NS (a solution chosen at random on the NN). This confirms that VEGAS outperforms F2NS, but also that both methods obtain good performance: F2NS may even produce better results than some versions of VEGAS, typically when C is small.

4.3.3 Impact of Neutrality

Figure 4: Average number of evaluated solutions per NN with respect to the neutral degree.


        K=2     K=4     K=6    K=8
q=2     20.53   16.54   14.21  12.20
q=3     12.91   10.05   8.77   7.64
q=4     9.25    7.47    6.47   5.54
Table 3: Average neutral degree of the NKq instances.

Table 3 gives the average neutral degree, computed over 10 000 random solutions, for each NKq-landscape under study. The neutrality decreases when q and/or K increase. Figure 4 gives the average number of solutions evaluated per NN by VEGAS according to the neutral degree, showing the influence of the neutrality on the number of solutions evaluated on each encountered NN. This number increases exponentially with the neutral degree. Note that the two peaks correspond to the smallest K value; as already pointed out, when the epistasis is small, VEGAS has a different behavior.

4.3.4 Exploration vs. Exploitation

Figure 5: The average number of NN (top) and the average number of evaluated solutions per NN (bottom). On the left, the average quantities are compared according to the parameter C. On the right, they are compared, according to K, between two VEGAS variants (exploration and exploitation) and F2NS.

Previous experiments show that exploring a larger number of solutions from the NN gives, in general, better performance. In order to analyze this in more detail, some statistics have been computed to study the dynamics of the search. Two main statistics are considered here: (i) the number of NN evaluated, and (ii) the number of solutions evaluated on each NN, which corresponds to the size of the explored part of the neutral network.

Figure 5 gives (a) the average number of NN and (c) the average number of solutions evaluated on each NN according to C. First, there is a clear difference between VEGAS with exploitation and VEGAS with exploration: average values are similar within the exploitation range of C-values, and likewise within the exploration range, so the different curves can be cut into two homogeneous parts. Second, the higher the average number of NN, the smaller the average number of evaluated solutions per NN. This attests to a large difference in the behavior of the VEGAS algorithm with respect to the C-values. In other words, when C is tuned towards exploration, more NN are explored, but the evaluated portion of each is small. When C is tuned towards exploitation, few NN are explored, but they are deeply evaluated. Such dynamics may explain why VEGAS with exploration gives, in some cases, better performance than VEGAS with exploitation.

Figure 5 (b) and (d) also gives the same quantities with respect to K for different algorithms: VEGAS with exploration, VEGAS with exploitation, and F2NS (random choice); similar trends are observed for the other configurations. It appears that the F2NS approach has a behavior "in-between" the two VEGAS variants. As previously seen in Figure 3, the performance of F2NS is either below VEGAS or "in-between" the two VEGAS variants. A natural conclusion is that the balance between exploration and exploitation is a critical issue for the performance of the algorithm.

5 Conclusions and Future Work

This work proposes a new methodology to deal with neutral combinatorial optimization problems. In this approach, all solutions identified on a plateau are considered in order to help the search to progress. Then, the most promising solution evaluated on the plateau is selected adaptively, based on the evolvability of solutions. VEGAS is an adaptive search algorithm using the multi-armed bandit framework and the ‘area under the curve’ credit assignment principle [7].

An experimental study on NKq-landscapes with neutrality has been conducted. It first shows that randomly choosing a solution on the plateau outperforms a netcrawler-based multi-start local search, at least for a reasonable level of neutrality. The experimental analysis also shows that VEGAS generally gives better results than selecting a solution at random on the plateau. The dynamics of VEGAS differ depending on the level of neutrality. Moreover, VEGAS requires a single problem-independent parameter, which tunes the trade-off between the exploration and the exploitation of the plateau. The influence of this parameter on the search dynamics has been analyzed in depth.

This approach shows encouraging results and opens future research directions. As in adaptive operator selection [7], other credit assignment methods, probably more specific to neutral landscapes, need to be tested. Moreover, similar experiments will help to better understand the dynamics of the VEGAS algorithm on other combinatorial optimization problems where neutrality arises, such as flowshop scheduling.


  • [1] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Mach. Learn., 47:235–256, May 2002.
  • [2] W. Banzhaf. Genotype-phenotype-mapping and neutral variation - a case study in genetic programming. In PPSN III: Third Conference on Parallel Problem Solving from Nature, p322–332, London, UK, 1994. Springer-Verlag.
  • [3] L. Barnett. Ruggedness and neutrality - the NKp family of fitness landscapes. In C. Adami, R. K. Belew, H. Kitano, and C. Taylor, editors, ALIFE VI, Proceedings of the Sixth International Conference on Artificial Life, p18–27. ALIFE, The MIT Press, 1998.
  • [4] L. Barnett. Netcrawling, optimal evolutionary search with neutral networks. In Proceedings of the 2001 Congress on Evolutionary Computation, CEC 2001, p30–37. IEEE Press, 2001.
  • [5] L. Da Costa, A. Fialho, M. Schoenauer, and M. Sebag. Adaptive operator selection with dynamic multi-armed bandits. In M. K. et al., editor, GECCO’08: Proc. 10th Annual Conference on Genetic and Evolutionary Computation, p913–920. ACM Press, July 2008.
  • [6] A. E. Eiben, Z. Michalewicz, M. Schoenauer, and J. E. Smith. Parameter control in evolutionary algorithms. In Parameter Setting in Evolutionary Algorithms, p19–46. 2007.
  • [7] A. Fialho, M. Schoenauer, and M. Sebag. Toward comparison-based adaptive operator selection. In J. B. et al., editor, GECCO’10: Proc. 12th Annual Conference on Genetic and Evolutionary Computation, p767–774. ACM Press, July 2010.
  • [8] J. Frank, P. Cheeseman, and J. Stutz. When gravity fails: local search topology. Journal of Artificial Intelligence Research, 7:249–281, 1997.
  • [9] N. Geard, J. Wiles, J. Hallinan, B. Tonkes, and B. Skellett. A comparison of neutral landscapes - NK, NKp and NKq. In Proceedings of the Congress on Evolutionary Computation, CEC ’02, p205–210, 2002.
  • [10] I. Harvey and A. Thompson. Through the labyrinth evolution finds a way: a silicon ridge. In T. Higuchi, M. Iwata, and W. Liu, editors, Evolvable Systems: From Biology to Hardware, First International Conference, ICES 96, volume 1259 of LNCS, p406–422. Springer, Berlin, 1996.
  • [11] S. A. Kauffman. The Origins of Order. Oxford University Press, New York, 1993.
  • [12] M. Kimura. The neutral theory of molecular evolution. Cambridge University Press., 1983.
  • [13] M.-E. Marmion, C. Dhaenens, L. Jourdan, A. Liefooghe, and S. Verel. NILS: a Neutrality-based Iterated Local Search and its application to flowshop scheduling. In European Conference on Evolutionary Computation in Combinatorial Optimisation (EvoCop’2011), LNCS. Springer-Verlag, 2011.
  • [14] M.-E. Marmion, C. Dhaenens, L. Jourdan, A. Liefooghe, and S. Verel. On the neutrality of flowshop scheduling fitness landscapes. In Learning and Intelligent OptimizatioN (LION 5), LNCS. Springer-Verlag, 2011.
  • [15] M. Newman and R. Engelhardt. Effect of neutral selection on the evolution of molecular species. Proc. R. Soc. London B., 256:1333–1338, 1998.
  • [16] T. Smith, P. Husbands, P. Layzell, and M. O’Shea. Fitness landscapes and evolvability. Evol. Comput., 10(1):1–34, 2002.
  • [17] T. M. C. Smith, P. Husbands, and M. O’Shea. Neutral networks in an evolutionary robotics search space. In Congress on Evolutionary Computation, CEC 2001, p136–145. IEEE Press, 2001.
  • [18] P. F. Stadler. Towards a theory of landscapes, volume 461, p78–163. Springer Berlin / Heidelberg, 1995.
  • [19] T. Stewart. Extrema selection: Accelerated evolution on neutral networks. In Congress on Evolutionary Computation CEC2001, p25–29. IEEE Press, 2001.
  • [20] D. Thierens. An adaptive pursuit strategy for allocating operator probabilities. In Conference on Genetic and evolutionary computation, GECCO ’05, p1539–1546, New York, NY, USA, 2005. ACM.
  • [21] V. K. Vassilev and J. F. Miller. The advantages of landscape neutrality in digital circuit evolution. In Springer, editor, 3rd International Conference on Evolvable Systems: From Biology to Hardware. LNCS., volume 1801, p252–263, 2000.
  • [22] S. Vérel, P. Collard, M. Tomassini, and L. Vanneschi. Fitness landscape of the cellular automata majority problem: view from the “Olympus”. Theor. Comp. Sci., 378:54–77, 2007.
  • [23] A. Wagner. Robustness and Evolvability in Living Systems. Princeton University Press, 2005.
  • [24] T. Yu and J. F. Miller. Neutrality and the evolvability of Boolean function landscapes. In EuroGP 2001, 4th European Conference on Genetic Programming, p204–217. Springer, Berlin, 2001.