1 Introduction
1.1 Evolutionary algorithms (EA)
Evolutionary algorithms are capable of solving complicated optimization tasks in which an objective function f: I → R shall be maximized, where i ∈ I is an individual from the set I of feasible solutions. Infeasible solutions due to constraints may also be considered by reducing f for each violated constraint. A population P is a multiset of individuals from I which is maintained and updated as follows: one or more individuals are selected according to some selection strategy. In generation-based EAs, the selected individuals are recombined (e.g. by crossover) and mutated, and constitute the new population. We prefer the more incremental, steady-state population update, which selects (and possibly deletes) only one or two individuals from the current population and adds the newly recombined and mutated individuals to it. We are interested in finding a single individual of maximal objective value for difficult multimodal and deceptive problems.
1.2 Standard selection schemes (STD)
The standard selection schemes (abbreviated by STD in the following), namely proportionate, truncation, ranking and tournament selection, all favor individuals of higher fitness [Gol89, GD91, BT95, BT97]. This is also true for less common schemes, like Boltzmann selection [MT93]. The fitness function is identified with the objective function (possibly after a monotone transformation). In proportionate selection the probability of selecting an individual depends linearly on its fitness f [Hol75]. In truncation selection the fittest fraction of individuals is selected, usually with suitable multiplicity to keep the population size fixed [MSV94]. (Linear) ranking selection orders the individuals according to their fitness. The selection probability is, then, a (linear) function of the rank [Whi89]. Tournament selection [Bak85], which selects the best out of k randomly chosen individuals, was primarily developed for steady-state EAs, but can be adapted to generation-based EAs. All these selection schemes have the property (and goal!) of increasing the average fitness of a population, i.e. of evolving the population toward higher fitness.
1.3 The problem of the right selection pressure
The standard selection schemes STD, together with mutation and recombination, evolve the population toward higher fitness. If the selection pressure is too high, the EA gets stuck in a local optimum, since the genetic diversity rapidly decreases. The suboptimal genetic material which might help in finding the global optimum is deleted too rapidly (premature convergence). On the other hand, the selection pressure cannot be chosen arbitrarily low if we want the EA to be effective. In difficult optimization problems, suitable population sizes, mutation and recombination rates, and selection parameters, which influence the selection intensity, are usually not known beforehand. Often, constant values are not sufficient at all [EHM99]. There are various suggestions to dynamically determine and adapt the parameters [Esh91, BHS91, Her92, SVM94]. Other approaches to preserve genetic diversity are fitness sharing [GR87] and crowding [Jon75]. They depend on the proper design of a neighborhood function based on the specific problem structure and/or coding. One approach which does not require a neighborhood function based on the genome is local mating [CJ91], however it has been shown that rapid takeover can still occur for basic spatial topologies [Rud00]. Another approach which has not been widely studied is preselection [Cav70].
We are interested in evolutionary algorithms which do not require special problem insight (a problem-specific neighborhood function and/or coding) and are able to effectively prevent population takeover. In this paper we introduce and analyze two potential approaches to this problem: the Fitness Uniform Selection Scheme (FUSS) and the Fitness Uniform Deletion Scheme (FUDS).
1.4 The fitness uniform selection scheme
FUSS is based on the insight that we are not primarily interested in a population converging to maximal fitness, but only in a single individual of maximal fitness. The scheme automatically creates a suitable selection pressure and preserves genetic diversity better than STD. The proposed fitness uniform selection scheme FUSS (see also Figure 1) is defined as follows: if the lowest/highest fitness values in the current population P are f_min/f_max, we select a fitness value f uniformly in the interval [f_min, f_max]. Then, the individual in P with fitness nearest to f is selected and a copy is added to P, possibly after mutation and recombination. We will see that FUSS maintains genetic diversity better than STD, since it uses a distribution over fitness values, unlike the STD schemes, which all use a distribution over individuals. Premature convergence is avoided in FUSS by abandoning convergence altogether. Nevertheless there is a selection pressure in FUSS toward higher fitness: the probability of selecting a specific individual is proportional to the distance to its nearest fitness neighbor. In a population with a high density of unfit and a low density of fit individuals, the fitter ones are effectively favored.
1.5 The fitness uniform deletion scheme
We may also preserve diversity through deletion rather than through selection. By always deleting from those individuals which have very commonly occurring fitness values, we achieve a population which is uniformly distributed across fitness values, like with FUSS. Because these deleted individuals are “commonly occurring” in some sense, this should help preserve population diversity. Under FUDS the role of the selection scheme is to govern how actively different parts of the solution space are searched, rather than to move the population as a whole toward higher fitness. Thus, like with FUSS, premature convergence is avoided by abandoning convergence as our goal. However, as FUDS is only a deletion scheme, the EA still requires a selection scheme, which may require a selection intensity parameter to be set. Thus we do not necessarily have a parameterless EA, as we do with FUSS. Nevertheless, due to the impossibility of population collapse, the performance is more robust than usual with respect to variation in selection intensity. Thus FUDS is at least a partial solution to the problem of having to correctly set a selection intensity parameter.
1.6 Contents
This paper extends and supersedes the earlier results reported in the conference papers [Hut02], [LHK04] and [LH05]. Among other things, this paper: extends the previous theoretical analysis of FUSS and gives the first theoretical analysis of FUDS and of their performance when combined; presents a new method of analysis called fitness tree analysis; presents the first set of experimental results which directly compares the two proposed schemes on the same problems with the same parameters, including when they are used together; and gives the first full analysis of population diversity measurements for FUSS, in particular extending and correcting some of the earlier speculation about performance problems in some situations.
The paper is structured as follows:
In Section 2 we discuss the problems of local optima and population takeover [GD91] in STD, which can be mitigated by restricting the number of similar individuals in a population. As we often do not have an appropriate functional similarity relation, we define a universal distance (a semimetric) based on the available fitness values only, which will serve our needs.
Motivated by the universal similarity relation and by the need to preserve genetic diversity, we define in Section 3 the fitness uniform selection scheme. We discuss under which circumstances FUSS leads to an (approximate) fitness uniform population.
Further properties of FUSS are discussed in Section 4, especially how FUSS creates selection pressure toward higher fitness and how it preserves diversity better than STD. Further topics are the equilibrium distribution and the transformation properties of FUSS under linear and nonlinear transformations of the fitness function.
Another way to utilize the ability of the universal similarity relation to preserve diversity is to use it to help target deletion. This gives us the fitness uniform deletion scheme, which we define in Section 5. As this produces a population which is approximately uniformly distributed across fitness levels, like with FUSS, many of the properties of FUSS carry over to an EA using FUDS. Some of these properties are highlighted in Section 6.
In Section 7 we theoretically demonstrate, by way of a simple optimization example, that an EA with FUSS or FUDS can optimize much faster than with STD. We show that crossover can be effective in FUSS, even when ineffective in STD. Furthermore, FUSS, FUDS and STD are compared to random search with and without crossover.
In Section 8 we develop a fitness tree model, which we believe captures the essential features of fitness landscapes for difficult problems with many local optima. Within this model we derive heuristic expressions for the optimization time of random walk, FUSS, FUDS and STD. They are compared, and a worst-case slowdown of FUSS relative to STD is obtained.
There is a possible additional slowdown when including recombination, as discussed in Section 9, which can be avoided by using scale-independent pair selection. This is a “best” compromise between unrestricted recombination and recombination of similar individuals only. It also has other interesting properties when used without crossover.
To simplify the discussion we have concentrated on the case of discrete, equispaced fitness values. In many practical problems, the fitness function is continuously valued. FUSS and some of the discussion of the previous sections are generalized to the continuous case in Section 10.
Section 11 begins our experimental analysis of FUSS and FUDS. In this section we give a detailed account of the EA software we have used for our experiments, including links to where the source code can be downloaded.
Section 12 examines the empirical performance of FUSS and FUDS on the artificially constructed deceptive optimization problem described in Section 7. These results confirm the correctness of our theoretical analysis.
In Section 13 we test FUSS and FUDS on randomly generated traveling salesman problems.
In Section 14 we examine the set covering problem, an NP-hard optimization problem with many real-world applications.
For our final test, in Section 15, we look at random 3-CNF SAT problems. These are also NP-hard optimization problems.
Section 16 contains a summary of our results and possible avenues for future research.
2 Universal Similarity Relation
2.1 The problem of local optima
Proportionate, truncation, ranking and tournament selection are the standard (STD) selection algorithms used in evolutionary optimization. They have the following property: if a local optimum with fitness f* has been found, the number of individuals with fitness f* tends to increase rapidly. Assume a low mutation and recombination rate, or, for instance, truncation selection after mutation and recombination. Further, assume that it is very difficult to find an individual fitter than f*. The population will then degenerate and will consist mostly of individuals with fitness f* after a few rounds. This decreased diversity makes it even less likely that f* gets improved. The suboptimal genetic material which might help in finding the global optimum has been deleted too rapidly. On the other hand, too high mutation and recombination rates convert the EA into an inefficient random search.
2.2 Possible solution
Sometimes it is possible to appropriately choose the mutation and recombination rate and population size by some insight into the nature of the problem. More often this is a trial and error process, or no single fixed rate works at all.
A naive fix of the problem is to artificially limit the number of identical individuals to a significant but small fraction of the population. If the space of individuals is large, there could be many very similar (but not identical) individuals of, for instance, fitness f*. The EA can still converge to a population containing only this class of similar individuals, with all others becoming extinct. In order for the limitation approach to work, one has to restrict the number of similar, not merely identical, individuals. Significant contributions in this direction are fitness sharing [GR87] and crowding [Jon75].
2.3 The problem of finding a similarity relation
If the individuals are coded in binary, one might use the Hamming distance as a similarity relation. This distance is consistent with a mutation operator which flips a few bits: it produces Hamming-similar individuals. Recombination (like crossover), however, can produce very dissimilar individuals w.r.t. this measure. In any case, genotypic similarity relations, like the Hamming distance, depend on the representation of the individuals as binary strings. Individuals with very dissimilar genomes might actually be functionally (phenotypically) very similar. For instance, when most bits are unused (like introns in genetic programming), they can be randomly disturbed without affecting the properties of the individual. For specific problems at hand, it might be possible to find suitable representation-independent functional similarity relations. On the other hand, in genetic programming, for instance, it is in general undecidable whether two individuals are functionally similar.
2.4 A universal similarity relation
Here we want to take a different approach. We define the difference or distance between two individuals i and j as d(i, j) := |f(i) − f(j)|.
The distance d is based solely on the fitness function, which is provided as part of the problem specification. It is independent of the coding/representation and other problem details, and of the optimization algorithm (e.g. the genetic mutation and recombination operators), and can trivially be computed from the fitness values. If we make the natural assumption that functionally similar individuals have similar fitness, they are also similar w.r.t. the distance d. On the other hand, individuals with very different codings, and even functionally dissimilar individuals, may be d-similar, but we will see that this is acceptable. For instance, individuals from different local optima of equal height are d-similar.
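As a minimal illustration, the distance and the derived similarity relation require nothing beyond cached fitness values; the function names below are our own, not from the paper.

```python
def fitness_distance(f_i: float, f_j: float) -> float:
    """Universal distance d(i, j) := |f(i) - f(j)|, computed from fitness alone."""
    return abs(f_i - f_j)

def similar(f_i: float, f_j: float, delta: float = 0.0) -> bool:
    """Two individuals count as similar if d(i, j) <= delta."""
    return fitness_distance(f_i, f_j) <= delta
```

Note that d is only a semimetric: two distinct individuals with equal fitness have distance zero.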
2.5 Relation to niching and crowding
Unlike fitness uniform optimization, diversity control methods like niching or crowding require a metric d_G to be defined over the genome space. By looking at the relationship between d_G and the fitness distance d we can relate these two types of diversity control: We say that a fitness function is smooth with respect to d_G if d_G(i, j) being small implies that the fitness difference, that is, d(i, j), is also small. This implies that if d(i, j) is not small, d_G(i, j) also cannot be small. Thus, if we limit the number of d-similar individuals, as we do in fitness uniform optimization, this will also limit the number of d_G-similar individuals, as is done in crowding and niching methods. The advantage of fitness uniform optimization is that we do not need to know what d_G is, or to compute its value. Indeed, the above argument holds for any metric on the genome space that the fitness function is smooth with respect to.
On the other hand, if the fitness function is not generally smooth with respect to d_G, then such a comparison between the methods cannot be made. However, in this case an EA is less likely to be effective, as small mutations in genome space with respect to d_G will produce unpredictable changes in fitness.
2.6 Topologies on individual space
The distance d induced by the fitness function is a semimetric on the individual space I (semi only because d(i, j) = 0 for i ≠ j is possible). The semimetric d induces a topology on I. Equal fitness suffices to declare two individuals equivalent, i.e. d is a rather small semimetric in the sense that the induced topology is rather coarse. We will see that a nonzero distance between individuals of different fitness is sufficient to avoid population takeover. Hence d induces the coarsest topology (it is the “smallest” distance) avoiding population takeover.
2.7 The problem of genetic drift
Besides elitist selection, the other major cause of diversity loss in a population is genetic drift. This occurs due to the stochastic nature of the selection operator breeding some individuals more often than others. In a finite population this will cause some individuals which have no close relatives to be replaced, thus reducing diversity. Indeed, without a sufficient rate of mutation, a population will eventually converge on a single genome, even if no selection pressure is applied.
Although fitness uniform optimization does not attempt to address this problem, some implications can be drawn. Clearly, with fitness uniform optimization a complete collapse in diversity is impossible as individuals with a wide range of fitness values are always preserved in the population. However, within a given fitness level genetic drift can occur, although the sustained presence of many individuals in other fitness levels to breed with will reduce this effect.
Theoretical analysis of genetic drift is often performed by calculating the Markov chain transition matrices to compute the time for the system to reach an absorption state where all of the population members have the same genome. As these results can be difficult to generalize, an alternative approach has been to measure genetic drift by measuring the loss in fitness diversity in a population over time
[RPB99a]. This is interesting as fitness uniform optimization attempts to maximize the entropy of the fitness values in the population, producing a very high variance in population fitness. Thus, at least according to the second method of analysis, very little genetic drift would be evident in the population.
3 Fitness Uniform Selection Scheme (FUSS)
3.1 Discrete fitness function
In this section we propose a new selection scheme which limits the fraction of similar individuals. For simplicity we start with a fitness function with discrete equispaced values f ∈ F := {f_min, f_min + ε, f_min + 2ε, ..., f_max}. We call two individuals i and j similar if d(i, j) ≤ δ. The continuously valued case is considered later. In the following we assume δ < ε. In this case, two individuals are similar if and only if they have the same fitness.
3.2 The goal
We have argued that in order to escape local optima, genetic variety should be preserved somehow. One way is to limit the number of similar individuals in the population. In an exact fitness uniform distribution there would be |P|/|F| individuals for each of the |F| fitness values, i.e. each fitness level would be occupied by a fraction 1/|F| of the individuals. The following selection scheme asymptotically transforms any finite population into a fitness uniform one.
3.3 The fitness uniform selection scheme (FUSS)
FUSS is defined as follows: randomly select a fitness value f uniformly from the fitness values F. Then, uniformly at random select an individual i with fitness f from P. Add another copy of i to P.
Note the two-stage uniform selection process, which is very different from a one-step uniform selection of an individual of P (see Figure 1).
In STD, inertia increases with population size: a large mass of unfit individuals reduces the probability of selecting fit individuals. This is not the case for FUSS. Hence, without loss of performance, we can define a pure model, in which no individual is ever deleted; the population size increases with time. No genetic material is ever discarded and no fine-tuning of the population size is necessary. What may prevent the pure model from being applied to practical problems are not computation time issues, but memory problems. If space becomes a problem we delete random individuals, as is usually done with a steady-state EA.
3.4 Asymptotically fitness uniform distribution
The expected number of individuals per fitness level f after t selections is n_t(f) = n_0(f) + t/|F|, where n_0(f) is the initial distribution. Hence, asymptotically each fitness level gets occupied uniformly by a fraction
n_t(f)/|P_t| = (n_0(f) + t/|F|) / (|P_0| + t) → 1/|F| for t → ∞,
where P_t is the population at time t. The same limit holds if each selection is accompanied by uniformly deleting one individual from the (now constant-sized) population.
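The asymptotic uniformity of the pure model can be checked with a short simulation; `fuss_pure` and the initial level counts below are our own illustrative choices. For the level counts it does not matter which individual within the chosen level is copied, so the sketch tracks counts only.

```python
import random

def fuss_pure(counts, steps, rng):
    """Pure FUSS on discrete fitness levels: pick a level uniformly at random,
    then copy one of its individuals back into the population
    (no deletion, no mutation), so the chosen level's count grows by one."""
    counts = list(counts)
    levels = range(len(counts))
    for _ in range(steps):
        f = rng.choice(levels)   # stage 1: uniform over fitness levels
        counts[f] += 1           # stage 2: any individual of level f is copied
    return counts

rng = random.Random(0)
counts = fuss_pure([90, 8, 2], steps=30_000, rng=rng)  # heavily skewed start
total = sum(counts)
fractions = [c / total for c in counts]  # each should approach 1/|F| = 1/3
```

Even starting from a 90/8/2 split, the occupation fractions converge toward 1/3 each.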
3.5 Fitness gaps and continuous fitness
We made two unrealistic assumptions. First, we assumed that each fitness level is initially occupied. If the smallest/largest fitness values in P are f_min/f_max, we extend the definition of FUSS by selecting a fitness value f uniformly in the interval [f_min − ε/2, f_max + ε/2] and then selecting an individual with fitness nearest to f (see Figure 2). This also covers the case where there are missing intermediate fitness values, and it also works for continuously valued fitness functions (ε → 0).
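The extended selection rule can be sketched as follows; `fuss_select` and its signature are our own illustration rather than the authors' implementation, and setting eps = 0 gives the continuous case.

```python
import random

def fuss_select(population, fitness, eps=0.0, rng=random):
    """FUSS with fitness gaps: draw a target fitness uniformly from
    [f_min - eps/2, f_max + eps/2], then return the individual whose
    fitness is nearest to the target (ties broken by list order)."""
    fits = [fitness(i) for i in population]
    lo, hi = min(fits) - eps / 2, max(fits) + eps / 2
    target = rng.uniform(lo, hi)
    return min(zip(population, fits), key=lambda pair: abs(pair[1] - target))[0]
```

Because the target is uniform over the fitness range, an individual's selection probability grows with the gap to its nearest fitness neighbors, independently of how many individuals crowd other levels.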
3.6 Mutation and recombination
The second assumption was that there is no mutation and recombination. In the presence of a small mutation and/or recombination rate, eventually each fitness level will become occupied, and the occupation fraction is still asymptotically approximately uniform. For larger rates the distribution will no longer be uniform, but the important point is that the occupation fraction of no fitness level decreases to zero for t → ∞, unlike for STD. Furthermore, FUSS by construction selects uniformly among the fitness levels, even if the levels are not uniformly occupied.
4 Properties of FUSS
4.1 FUSS effectively favors fit individuals
FUSS preserves diversity better than STD, but the latter have a (higher) selection pressure toward higher fitness, which is necessary for optimization. At first glance it seems that there is no such pressure at all in FUSS, but this is deceiving. As FUSS selects uniformly among the fitness levels, individuals of lowly populated fitness levels are effectively favored: the probability of selecting a specific individual with fitness f is inversely proportional to the number of individuals in P with fitness f (see Figure 1). In an initial typical (FUSS) population there are many unfit and only a few fit individuals. Hence, fit individuals are effectively favored until the population becomes fitness uniform. Occasionally, a new higher fitness level is discovered and occupied by a single new individual, which then, again, is favored.
4.2 No takeover in FUSS
With FUSS, takeover of the highest fitness level never happens. The concept of takeover time [GD91] is meaningless for FUSS. The fraction of fittest individuals in a population is always small. This implies that the average population fitness is always much lower than the best fitness. Actually, a large number of fit individuals is usually not the true optimization goal. A single fittest individual usually suffices to solve the optimization task.
4.3 FUSS may also favor unfit individuals
Note that if it is also difficult to find individuals of low fitness, i.e. if there are only a few individuals of low fitness, FUSS will favor these individuals too. Half of the time is then “wasted” searching at the wrong end of the fitness scale. This possible slowdown by a factor of 2 is usually acceptable. In Section 7 we will see that in certain circumstances this behavior can actually speed up the search. In general, fitness levels which are difficult to reach are favored.
4.4 Distribution within a fitness level
Within a fitness level there is no selection pressure which could further exponentially decrease the population in certain regions of the individual space. This (exponential) reduction is the major enemy of diversity, and it is suppressed by FUSS. Within a fitness level, the individuals drift around freely (by mutation). Furthermore, there is a steady stream of individuals into and out of a level by (d)evolution from (higher) lower levels. Consequently, FUSS develops an equilibrium distribution which is nowhere zero. This does not mean that the distribution within a level is uniform. For instance, if there are two (local) maxima of the same height, a very broad one and a very narrow one, the broad one may be populated much more than the narrow one, since it is much easier to “find”.
4.5 Steady creation of individuals from every fitness level
In STD, a wrong step (mutation) at some point in evolution might cause further evolution in the wrong direction. Once a local optimum has been found and all unfit individuals were eliminated it is very difficult to undo the wrong step. In FUSS, all fitness levels remain occupied from which new mutants are steadily created, occasionally leading to further evolution in a more promising direction (see Figure 3).
4.6 Transformation properties of FUSS
FUSS (with continuous fitness) is independent of scaling and shifting of the fitness function, i.e. FUSS applied to f'(i) := α·f(i) + β with α ≠ 0 is identical to FUSS applied to f. This is true even for α < 0, since FUSS searches for maxima and minima alike, as we have seen. Unlike tournament, ranking and truncation selection, FUSS is not independent of a nonlinear (monotone) transformation of f. Its nonlinear transformation properties are more like those of proportionate selection.
5 Fitness Uniform Deletion Scheme (FUDS)
For a steady-state evolutionary algorithm, each cycle of the system consists of selecting which individual or individuals to cross over and mutate, and then selecting which individual to delete in order to make space for the new child. The usual deletion scheme is random deletion, as this is neutral in the sense that it does not bias the distribution of the population in any way, and it requires no additional work, such as evaluating the similarity of individuals based on their genes. Another common strategy is to use an elitist deletion scheme.
Here we propose to use the similarity semimetric defined in Section 2 to achieve a uniform distribution across fitness levels, like with FUSS, except that we achieve this by selectively deleting those members of the population which have very commonly occurring fitness values. Of course this leaves the selection scheme unspecified; indeed, we may use any standard selection scheme, such as tournament selection, in combination with FUDS. It also means that we lose one of the nice features of FUSS, as we now need to manually tune the selection intensity for our application (FUSS, of course, is parameterless). Nevertheless it allows us to give many FUSS-like properties to an existing EA using a standard selection scheme, with only a minor modification to the deletion scheme.
The intuition behind why FUDS preserves population diversity is very simple: If an individual has a fitness value which is very rare in the population then this individual almost certainly contains unique information which, if it were to be deleted, would decrease the total population diversity. Conversely, if we delete an individual with very commonly occurring fitness then we are unlikely to be losing significant diversity. Presumably most of these individuals are common in some sense and likely exist in parts of the solution space which are easy to reach. Thus the fitness uniform deletion strategy is now clear: Only delete individuals with very commonly occurring fitness values as these individuals are less likely to contain important genetic diversity.
In practice, FUDS is implemented as follows. Let f_min and f_max be the minimum and maximum fitness values possible for a problem, or at least reasonable lower and upper bounds. We divide the interval [f_min, f_max] into a collection of subintervals of equal length, which we call fitness levels. As individuals are added to the population their fitness is computed and they are placed in the set of individuals corresponding to the fitness level they belong to. Thus the number of individuals in each fitness level describes how common fitness values within this interval are in the current population. When a deletion is required, the algorithm locates the fitness level with the greatest number of individuals and then deletes a random individual from this level. In the case where multiple fitness levels have maximal size, the lowest of these levels is used.
If the number of fitness levels is chosen too low, say 5 levels, then the resulting model of the distribution of individuals across the fitness range will be too coarse. Alternatively, if a large number of fitness levels is used with a very small population, the individuals may become too thinly spread across the fitness levels. While in these extreme cases this could affect the performance of FUDS, in practice we have found that the system is not very sensitive to the setting of this parameter. As a rule of thumb, the number of fitness levels should scale with the population size.
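A minimal sketch of the bin-based implementation described above; the class and method names are ours, the number of levels is left as a parameter, and ties are broken toward the lowest level as stated.

```python
import random

class FUDSDeleter:
    """Fitness uniform deletion: track individuals in equal-width fitness
    levels and always delete from the most populated level."""

    def __init__(self, f_min, f_max, num_levels, rng=random):
        self.f_min, self.f_max = f_min, f_max
        self.bins = [[] for _ in range(num_levels)]
        self.rng = rng

    def _level(self, f):
        width = (self.f_max - self.f_min) / len(self.bins)
        # Clamp so that f == f_max falls into the top level.
        return min(int((f - self.f_min) / width), len(self.bins) - 1)

    def add(self, individual, f):
        self.bins[self._level(f)].append(individual)

    def delete_one(self):
        # Most populated level; among ties, the lowest fitness level wins.
        level = max(range(len(self.bins)),
                    key=lambda k: (len(self.bins[k]), -k))
        victims = self.bins[level]
        return victims.pop(self.rng.randrange(len(victims)))
```

Usage: add each new child with its fitness, and call `delete_one` whenever the population exceeds its target size.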
For discrete valued fitness functions there is a natural lower bound on the interval length because below a certain value there will be more intervals than unique fitness values. Of course this cannot happen when the fitness function is continuous. Other than this small technical detail, the two cases are treated identically.
As FUDS spreads the individuals out across a wide range of fitness values, for small populations the EA may become inefficient, as only a few individuals will have relatively high fitness. This is especially true for problems which are not deceptive, as there will be little value in having individuals with low to medium fitness in the population. Of course, these are not the kinds of problems for which FUDS was designed. In practice we have always used populations of between 250 and 5,000 individuals and have not observed a decline in performance relative to random deletion at the lower end of this range.
An alternative implementation that avoids discretization is to choose the two individuals that have the most similar fitness and delete one of them. An efficient implementation keeps a list of the individuals ordered by their fitness along with an ordered list of the distances between the individuals. Then in each cycle one of the two individuals with closest fitness to each other is selected for deletion. Although the performance of this algorithm was better than random deletion, it was not as good as the implementation of FUDS using bins. We conjecture that the reason for this is as follows: When there are just a few very fit individuals in the population it is quite likely that they will be highly related to each other and have very similar fitness. This means that if we delete the individuals with most similar fitness it is likely that many of the very fit individuals will be deleted. However with the bins approach this will not happen as there are typically few individuals in the high fitness bins. Thus, although deleting one of the closest individuals in terms of fitness might preserve diversity well, it also changes the pressure on the population distribution over fitness levels. This small change in distribution dynamics appears to reduce performance in practice.
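The discretization-free variant can be sketched by re-sorting the population each cycle; `closest_pair_delete` is our own illustrative O(n log n) version and omits the incremental ordered-list bookkeeping described above.

```python
import random

def closest_pair_delete(population, fitness, rng=random):
    """Delete one individual of the pair with most similar fitness.
    After sorting by fitness, the closest pair is always adjacent.
    Assumes the population holds at least two individuals."""
    ordered = sorted(population, key=fitness)
    gaps = [fitness(b) - fitness(a) for a, b in zip(ordered, ordered[1:])]
    k = min(range(len(gaps)), key=gaps.__getitem__)
    victim = ordered[k + rng.randrange(2)]  # delete either member of the pair
    population.remove(victim)
    return victim
```

As discussed above, this preserves fitness diversity but tends to remove members of small clusters of very fit, similar individuals, which the bin-based version avoids.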
6 Properties of FUDS
As FUDS uniformly distributes the population across fitness levels, like FUSS does, many of the key properties of FUSS also carry over to an EA that is using a standard selection scheme (STD) combined with FUDS deletion.
6.1 No takeover in FUDS
Under FUDS the takeover of the highest fitness level, or indeed any fitness level, is impossible. This is easy to see because as soon as any fitness level starts to dominate, all of the deletions become focused on this level until it is no longer the most populated fitness level. As a byproduct, this also means that individuals on relatively unpopulated fitness levels are preserved.
6.2 Steady creation of individuals from every fitness level
Another similarity with FUSS is the steady creation of individuals on many different fitness levels. This occurs because under FUDS some individuals on each fitness level are always kept. This makes it relatively easy for the EA to find its way out of local optima as it keeps on exploring evolutionary paths which do not at first appear to be promising.
6.3 Robust performance with respect to selection intensity
Because FUDS is only a deletion scheme, we still need to choose a selection scheme for the EA. Of course this selection scheme may then require us to set a selection intensity parameter. While this is not as desirable as FUSS, which has no such parameter, at least with FUDS we expect the performance of the system to be less sensitive to the correct setting of this parameter. For example, if the selection intensity is set too high the normal problem is that the population rushes into a local optimum too soon and becomes stuck before it has had a chance to properly explore the genotype space for other promising regions. However, as we noted above, with FUDS a total collapse in population diversity is impossible. Thus much higher levels of selection intensity may be used without the risk of premature convergence.
In some situations, if very low selection intensity is used along with random deletion, the population tends not to explore the higher areas of the fitness landscape at all. This can be illustrated by a simple example. Consider a population which contains 1,000 individuals. Under random deletion all of these individuals, including the highly fit ones, have a 1 in 1,000 chance of being deleted in each cycle, and so the expected lifetime of an individual is 1,000 deletion cycles. Thus if a highly fit individual is to contribute a child of the same fitness or higher, it must do so reasonably quickly. However, for some optimization problems the probability of a fit individual having such a child when it is selected is very low, so low in fact that the individual is more likely to be deleted before this happens. As a result the population becomes stuck, unable to find individuals of greater fitness before the fittest individuals are killed off.
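The arithmetic of the 1,000-individual example can be checked directly; a minimal sketch (the lifetime under uniform random deletion is geometric with mean equal to the population size):

```python
def expected_lifetime(pop_size):
    """Under random deletion each individual is deleted with probability
    1/pop_size per cycle, so its lifetime is geometrically distributed
    with mean pop_size cycles, as in the example in the text."""
    return pop_size

def prob_survives(pop_size, cycles):
    """Probability that a given individual survives `cycles` deletion
    cycles under uniform random deletion."""
    return (1 - 1 / pop_size) ** cycles
```

For a population of 1,000 an individual survives its own expected lifetime with probability only about 0.37, which is why a rare fit individual with a tiny per-selection success probability is likely to die before contributing a comparable child.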
The usual solution to this problem is to increase the selection intensity because then the fit individuals are selected more often and thus are more likely to contribute a child of similar or greater fitness before they are deleted. Another is to change the deletion scheme so that these individuals live longer. This is what happens with FUDS as rare fit individuals are not deleted. Effectively it means that with FUDS we can often use much lower selection intensity without the population becoming stuck.
6.4 Transformation properties of FUDS
While FUDS has the added complication of having to choose the number of subintervals into which to divide the fitness range, this number is only a function of the population size and the distributional characteristics of the problem. Thus any linear transformation of the fitness function has no effect on FUDS. Nonlinear transformations, however, will affect performance.
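The invariance under linear transformations is easy to see from the binning rule itself; a sketch, assuming equal-width subintervals over the observed fitness range:

```python
def bin_index(f, f_min, f_max, n_bins):
    """Map a fitness value to one of n_bins equal-width subintervals.
    A linear transformation f -> a*f + b (a > 0) rescales f_min and
    f_max identically, so every individual lands in the same bin as
    before: FUDS behaves identically under such transformations."""
    if f >= f_max:
        return n_bins - 1
    return int((f - f_min) * n_bins / (f_max - f_min))
```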
6.5 Problem and representation independence
Because FUDS only requires the fitness of individuals, the method is completely independent of the problem and genotype representation, i.e. how the individuals are coded.
6.6 Simple implementation and low computational cost
As the algorithm is simple and the fitness function is given as part of the problem specification, FUDS is very easy to implement and requires few computational resources.
7 A Simple Example
In the following, we use a simple example problem to compare the performance of fitness uniform selection (FUSS), random search (RAND) and standard selection (STD), each used both with and without recombination. We also examine the performance of standard selection when used with the fitness uniform deletion scheme (FUDS). We regard this problem as a prototype for deceptive multimodal functions. The example demonstrates how FUSS and FUDS can be superior to RAND and STD in some situations. More generic situations will be considered in Section 8. An experimental analysis of this problem appears in Section 12.
7.1 Simple 2D example
Consider individuals , which are tuples of real numbers, each coordinate in the interval . The example models individuals possessing up to 2 “features”. Individual possesses feature if , and feature if . The fitness function is defined as
We assume . Individuals with neither of the two features () have fitness . These “local optima” occupy most of the individual space , namely a fraction . It is disadvantageous for an individual to possess only one of the two features (), since or 2 in this case. In combination (), the two features lead to the highest fitness, but the global maximum occupies the smallest fraction of the individual space . With a fraction , the minima are in between. The example has sort of an XOR structure, which is hard for many optimizers.
7.2 Random search
Individuals are created uniformly in the unit square. The “local optimum” is easy to “find”, since it occupies nearly the whole space. The global optimum is difficult to find, since it occupies only of the space. The expected time, i.e. the expected number of individuals created and tested until one with is found, is . Here and in the following, the “time” is defined as the expected number of created individuals until the first optimal individual (with ) is found. is neither a takeover time nor the number of generations (we consider steady-state EAs).
7.3 Random search with crossover
Let us occasionally perform a recombination of individuals in the current population. We combine the coordinate of one uniformly selected individual with the coordinate of another individual . This crossover operation maintains a uniform distribution of individuals in . It leads to the global optimum if and . The probability of selecting an individual in is (we assumed that the global optimum has not yet been found). Hence, the probability that crosses with is . The time to find the global optimum by random search including crossover is still ( denotes asymptotic proportionality).
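The fitness function and the coordinate-exchange crossover of this 2D example can be sketched as follows. The feature threshold `DELTA` and the exact fitness values are illustrative assumptions (the concrete constants appear in the omitted fitness formula; the text states only that both features give the highest fitness 4, one feature alone gives fitness 1 or 2, and neither gives the deceptive value 3; we use 1 for the single-feature case):

```python
DELTA = 0.1  # assumed illustrative feature threshold

def fitness(ind):
    """Sketch of the deceptive 2D fitness: possessing both features
    (x < DELTA and y < DELTA) is best, one feature alone is penalized,
    and the large region with neither feature is a deceptive optimum."""
    x, y = ind
    fx, fy = x < DELTA, y < DELTA
    if fx and fy:
        return 4  # global optimum region, tiny fraction of the square
    if fx or fy:
        return 1  # one feature alone is disadvantageous
    return 3      # the large deceptive "local optimum"

def crossover(a, b):
    """Combine the x coordinate of one parent with the y coordinate of
    another; this preserves a uniform distribution on the unit square."""
    return (a[0], b[1])
```

Crossing a parent with feature 1 and a parent with feature 2 immediately yields the global optimum, which is the effect exploited in the analysis above.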
7.4 Mutation
The result remains valid (to leading order in ) if, instead of a random search, we uniformly select an individual and mutate it according to some probabilistic, sufficiently mixing rule which preserves uniformity in . One popular such mutation operator is to use a sufficiently long binary representation of each coordinate, as in genetic algorithms, and flip a single bit. For simplicity we assume in the following a mutation operator which replaces with probability the first/second coordinate by a new uniform random number. Other mutation operators which mutate the first/second coordinate with probability , preserve uniformity, are sufficiently mixing, and leave the other coordinate unchanged (like the single-bit-flip operator) lead to the same scaling of with (but with different proportionality constants).
7.5 Standard selection with crossover
The and individuals contain useful building blocks, which could speed up the search under a suitable selection and crossover scheme. Unfortunately, the standard selection schemes favor individuals of higher fitness and will diminish the population fraction. The probability of selecting individuals is even smaller than in random search. Hence . Standard selection does not improve performance, not even in combination with crossover, although crossover is well suited to produce the needed recombination.
7.6 FUSS
At the beginning, only the level is occupied and individuals are uniformly selected and mutated. The expected time until an or individual in is created is (not , since only one coordinate is mutated). From this time on, FUSS selects the individual(s) one half(!) of the time, and the abundant individuals only the remaining half. When level and level are occupied, the selection probability is for these levels. With probability the mutation operator will mutate the coordinate of an individual in or the coordinate of an individual in and produce a new individual. The relative probability of creating an individual is . The expected time to find this global optimum from the individuals, hence, is . The total expected time is . FUSS is much faster because it exploits unfit individuals. This is an example where (local) minima can help the search. Examples where a low local maximum can help in finding the global maximum, but where standard selection sweeps too quickly onward to higher but useless local maxima, can also be constructed.
7.7 FUSS with crossover
The expected time until an individual in and an individual in is found is , even with crossover. The probability of selecting an individual is . Thus, the probability that a crossing operation crosses with is . The expected time to find the global optimum from the individuals, hence, is , where the factor depends on the frequency of crossover operations. This is far faster than by STD, even if the levels were local maxima, since to get a high standard selection probability, the level has first to be taken over, which itself needs some time depending on the population size. In FUSS a single and a single individual suffice to guarantee a high selection probability and an effective crossover. Crossover does not significantly decrease the total time , but for a suitable 3D generalization we get a large speedup by a factor of .
7.8 FUDS with crossover
Assume that initially all of the individuals have and that we are using random selection. For any mutation, the probability of the child being in is . Until becomes quite full, FUDS will never delete individuals from these areas. Furthermore, if an individual in is mutated, then the mutant will also be in with probability . Therefore, while most of the population has , we can lower bound the probability of a new child being in by . It then follows that if is the size of the population, we can upper bound the expected time for to contain half the total population by . Once this occurs (and most likely well before this point), crossover will almost immediately produce an individual with by crossing a member of with a member of . Thus . This gives FUDS, when used with random selection, scaling characteristics similar to those of FUSS. If we use a selection scheme with higher intensity, our bound on the expected time for half the population to have remains unchanged, as the bound holds in the worst case where only individuals with are selected. However, higher selection intensity makes the final crossover required to find an individual with less likely. For moderate levels of selection intensity this is clearly not a significant factor; more importantly, it is O(1) and independent of . Thus the order of scaling of is just for this difficult problem, the same as .
7.9 Simple 3D example
We generalize the 2D example to D-dimensional individuals and a fitness function
where
is the characteristic function of feature
For , coincides with the 2D example. For , the fractions of where are approximately . With the same line of reasoning we get the following expected search times for the global optimum:
This demonstrates the existence of problems where FUSS is much faster than RAND and STD, and where crossover can give a further boost to FUSS, even when it is ineffective in combination with STD.
8 Fitness-Tree Analysis
The fitness tree model
A general, problem-independent comparison of the various optimization algorithms is difficult. We are interested in the performance on difficult fitness landscapes with many local optima.
We only consider mutation; recombination is discussed in the next section. The evolutionary neighborhood (not to be confused with similarity) of an individual is defined as the set of individuals that can be created from by a single mutation (we have “small” mutations in mind, e.g. single bit flips, not macro mutations, which connect all individuals). Two individuals and with the same fitness are defined to belong to the same species if there is a finite sequence of mutations which transforms into such that all individuals in the sequence also have fitness . Each fitness level is partitioned in this way into disjoint species. We say a species of fitness can evolve from a species of fitness if there is a mutation which transforms an individual of the latter species into one of the former. Such species are connected by an edge in Figures 4 and 5. A species is said to be promising if it can evolve to the global optimum .
8.1 Additional definitions and simplifying assumptions

Evolution which skips fitness levels is ignored, as is devolution to species of lower fitness other than the primordial species.

Random individuals have lowest fitness with high probability, and there is only one species of fitness .

There is a fixed branching factor , i.e. each species can evolve into improved species, or represents a local optimum from which no further evolution is possible.

There is a single global optimum (or optima to be consistent with the previous item).

There are different species per fitness level (except near and where there must be fewer to be consistent with the previous items).

The probability that an individual evolves to a higher fitness is very small. In most cases a mutation keeps an individual within its species or devolves it.

The probability to evolve to one of the offspring species is uniform, i.e. for all offspring species.
We have the feeling that this picture covers the essential features of fitness landscapes for difficult problems. The qualitative conclusions we will draw should still hold when some or all of the additional simplifying assumptions are violated.
8.2 Example
Consider the case of individuals which are real-valued dimensional vectors, i.e. . Let the fitness function be continuous and positive, with many local maxima, tending to zero for large arguments. This covers a large range of physical optimization problems. Mutation shall be local in , i.e. . As FUSS and the fitness tree model are only defined for discrete fitness functions, we discretize to , which is acceptable for sufficiently small . Typical fitness landscapes for and , together with their fitness trees, are depicted in Figures 4 and 5. Since mutation is a local operation, each species is a (possibly multiply punctured) connected slice (dimensional subvolume) and evolution can only occur from to (). Assumption (i) is generally satisfied. The special fitness landscapes depicted in Figures 4 and 5 also satisfy (ii,iii,iv,v) with and .
8.3 Random walk
Consider a mutation-induced random walk of a single individual. Due to the low evolution probability , most of the time will be spent on individuals of the lowest fitness . As evolution is a tree, there is only one evolution sequence which leads to the global optimum. At each evolution step, the correct offspring species (out of ) has to be evolved. The probability of an evolution step in the right direction, hence, is . evolution steps are necessary to reach . Therefore, the expected time to find the global maximum by random walk is . Random walk is very slow; it is exponential in the number of fitness levels, with a very large base .
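The qualitative behavior of this random walk can be checked with a small Monte Carlo sketch. The devolution rule used here (a wrong branch sends the walker back to the bottom level) is an illustrative simplification of the tree model, not the paper's exact process:

```python
import random

def random_walk_time(levels, branch, p_evolve, trials=2000,
                     rng=random.Random(0)):
    """Monte Carlo sketch of the random-walk analysis: per mutation,
    with probability p_evolve the individual jumps to one of `branch`
    offspring species, only one of which is on the path to the optimum;
    a wrong branch is assumed to restart from the bottom.  Returns the
    mean number of mutations needed to reach the top fitness level."""
    total = 0
    for _ in range(trials):
        level, steps = 0, 0
        while level < levels:
            steps += 1
            if rng.random() < p_evolve:
                if rng.randrange(branch) == 0:
                    level += 1   # the promising offspring species
                else:
                    level = 0    # wrong branch: restart from the bottom
        total += steps
    return total / trials
```

Even in this toy version, the hitting time grows rapidly with the branching factor, in line with the exponential estimate above.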
8.4 FUSS
Assume that fitness levels from to are occupied. The probability that FUSS selects an individual of fitness is . Under the additional assumption that the occupation of species within one fitness level is approximately uniform most of the time, the probability of selecting an individual of the promising species, which can evolve to the global optimum, is . The probability of an evolution step in the right direction is as in the random walk case. Hence, the total expected time for an evolution step in the right direction is . The total time for an evolution from to the global optimum is obtained by summation over .
8.5 FUDS
A similar analysis can be applied to FUDS. Assume again that the fitness levels from to are occupied and that the occupation of species within each fitness level is approximately uniform most of the time. Because FUDS tends to spread the population out, like FUSS, this assumption is not unreasonable. As FUDS is only a deletion scheme we must also specify a selection scheme. For our analysis we will take a very simple elitist selection scheme that half of the time selects an individual from the highest fitness level, and the other half of the time selects an individual from a lower level. It follows then that the probability of selecting a promising species is and the probability that this then results in an evolutionary step in the right direction is . Thus the total expected time for an evolutionary step in the right direction is . Therefore by summation the total expected time to evolve to the global optimum is . Of course this analysis rests on our choice of selection scheme and the assumptions about the uniformity of the population that we have made. When FUDS is used with selection schemes which are very greedy these uniformity assumptions will likely be violated and less favorable bounds could result.
8.6 Standard selection
We assumed a fixed number of species per fitness level and or offspring species. This implies that only a fraction of species can evolve to higher fitness. We assume that fitness level has been taken over, i.e. most individuals have fitness . The probability of evolution is . A significant fraction (for simplicity we assume most) of the individuals must evolve to the next fitness level before evolution at a relevant rate can occur to the level after that. Hence, the time to take over the next fitness level is roughly . As there are fitness levels, the total time is .
We wrote as we have made two significant favorable assumptions. In order to ensure convergence, the promising species in the current fitness level has to be occupied. If we assume a uniform occupation of species within one fitness level, as for FUSS, this means that all species of the current fitness level have to be populated. As there are species, has to be at least , which can be quite large. On the other hand, STD linearly slows down with , unlike FUSS. Hence, there is a tradeoff in the choice of .
More serious is the following problem. Assume that the first individual evolved with fitness is one in a non-promising species . Due to selection pressure it might happen that species takes over the whole population before all (or at least the promising) species with fitness can evolve from the ones of fitness . The probability to find the global optimum in the worst case scenario, where at each level only one species is occupied, is . This is the original problem of the loss of genetic diversity discussed at the outset, which led to the invention of FUSS.
Every other fix the authors are aware of only seems to diminish the problem rather than solve it. One fix is to repeatedly restart the EA, but a huge number of restarts might be necessary. The time is exponential in , as for random walk, but with a smaller base . The true time is expected to lie somewhere between and this worst case estimate, although an unfavorable setting may never reach the global optimum ( in this case).
8.7 Performance comparison
The times , and should be regarded, at best, as rules of thumb, since the derivation was rather heuristic due to the list of assumptions. The quotient is more reliable:
and
We will give a more direct argument in Section 9 that the slowdown of FUSS relative to STD is at most .
Finally, a truism has been recovered, namely that an EA can, under certain circumstances, be much faster than random walk, that is, .
9 ScaleIndependent Selection and Recombination
9.1 Worst case analysis
We now want to estimate the maximal possible slowdown of FUSS compared to STD. Let us assume that all individuals in STD have fitness , and that once one individual with fitness has been found, takeover of level is quick. Let us assume that this quick takeover is actually good (e.g. if there are no local maxima). The selection probability of individuals of the same fitness is equal. For FUSS we assume individuals in the range of and . Uniformity is not necessary. In the worst case, a selection of an individual of fitness never leads to an individual of fitness , i.e. is always useless. The probability of selecting an individual with fitness is . At least every FUSS selection corresponds to a STD selection. Hence, we expect a maximal slowdown by a factor of , since FUSS “simulates” STD statistically every selection. It is possible to construct problems where this slowdown occurs (unimodal function, local mutation , no crossover). Gradient ascent would be the algorithm of choice in this case. On the other hand, we have not observed this slowdown in our simple 2D example or in the TSP experiments, where FUSS outperformed STD in solution quality/time (see the experimental results in Section 12). Since real world problems often lie between these extreme cases, it is desirable to modify FUSS to cope with simple problems as well, without destroying its advantages for complex objective functions.
9.2 Quadratic slowdown due to recombination
We have seen that . In the presence of recombination, a pair of individuals has to be selected. The probability that FUSS selects two individuals with fitness is . Hence, in the worst case, there could be a slowdown by a factor of — for independent selection we expect . This potential quadratic slowdown can be avoided by selecting one fitness value at random, and then two individuals of this single fitness value. For this dependent selection, we expect . On the other hand, crossing two individuals of different fitness can also be advantageous, like the crossing of with individuals in the 2D example of Section 7.
9.3 Scale independent selection
A near optimal compromise is possible: a high selection probability if and otherwise. A “scale independent” probability distribution is appropriate for this. We define

(1)
The in the denominator has been added to regularize the expression for . The factor ensures correct normalization (). By using , one can show that i.e. for . In the following we assume , i.e. . Apart from a minor additional logarithmic suppression of order we have the desired behavior for and otherwise:
During optimization, the minimal/maximal fitness of an individual in population is . In the definition of one has to use instead of , i.e. replaced with . So (1) cannot be achieved by a static reparametrization of the fitness ( replaced with ). Furthermore, the important idea of sampling from a fitness level rather than from individuals directly is still maintained. The only difference is that the population no longer converges to a fitness uniform one, but to one with a distribution that is biased toward higher fitness yet still never collapses onto a single fittest individual. In the worst case, we expect a small slowdown of the order of , both compared to FUSS and compared to STD.
9.4 Scale independent pair selection
It is possible to (nearly) have the best of independent and dependent selection: a high selection probability if and otherwise, with uniform marginal . The idea is to use a strongly correlated joint distribution for selecting a fitness pair. A “scale independent” probability distribution is appropriate. We define the joint probability of selecting two individuals of fitness and , and the marginal , as

(2)
We assume in the following. The in the denominator has been added to regularize the expression for . The factor ensures correct normalization for . More precisely, using , one can show that
i.e. is not strictly normalized to and the marginal is only approximately (within a factor of 2) uniform. The first defect can be corrected by appropriately increasing the diagonal probabilities . This also solves the second problem.
(3) 
9.5 Properties of
is normalized to with uniform marginal
Apart from a minor additional logarithmic suppression of order we have the desired behavior for and otherwise:
During optimization, the minimal/maximal fitness of an individual in population is . In the definition of one has to use instead of , i.e. replaced with .
9.6 ScaleIndependent Deletion
Just as the selection scheme FUSS has its dual in the deletion scheme FUDS, we can likewise create the dual of ScaleIndependent Selection in the form of ScaleIndependent Deletion. Thus rather than targeting deletion from the population so that the distribution becomes flat, as we do with FUDS, we now define a convex curve which is peaked at the fittest individual in the population and delete the population down so that it follows the shape of this curve. This retains some of the advantages of FUDS, for example the population cannot collapse to just a few fitness levels, and yet it recognizes that for many problems it is useful to bias the population distribution toward fit individuals. Of course such problems are less deceptive than the kind that FUSS and FUDS are intended for.
10 Continuous Fitness Functions
10.1 Effective discretization scale
Up to now we have considered a discrete valued fitness function with values in . In many practical problems, the fitness function is continuous valued with . We generalize FUSS, and some of the discussion of the previous sections, to the continuous case by replacing the discretization scale with an effective (time-dependent) discretization scale . By construction, FUSS shifts the population toward a more uniform one. Although the fitness values are no longer equispaced, they still form a discrete set for finite population . For a fitness uniform distribution, the average distance between (fitness) neighboring individuals is . We define to be this average spacing.
10.2 FUSS
Fitness uniform selection for a continuous valued function has already been mentioned in Section 3. We just take a uniform random fitness in the interval . Independent and dependent fitness pair selection as described in the last section works analogously. An version of correlated selection does not exist; a nonzero is important. A discrete pair is drawn with probability as defined in (2) and (3) with and replaced by and . The additional suppression is small for all practically realizable population sizes. In all cases an individual with fitness nearest to () is selected from the population (randomly if there is more than one nearest individual).
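The continuous-fitness FUSS rule described above can be sketched directly. This is an illustrative reconstruction (a fixed seed is used so the behavior is reproducible; tie-breaking is left to `min`):

```python
import random

def fuss_select(population, fitness, rng=random.Random(0)):
    """FUSS for continuous fitness (sketch): draw a uniform random
    fitness target in [f_min, f_max] over the current population and
    select the individual whose fitness is nearest to the target."""
    fits = [fitness(ind) for ind in population]
    target = rng.uniform(min(fits), max(fits))
    return min(population, key=lambda ind: abs(fitness(ind) - target))
```

Note how an isolated very fit individual is selected roughly half the time when the rest of the population is clustered at low fitness, exactly the behavior that gives FUSS its focus without takeover.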
If we assume a fitness uniform distribution, then our worst case bound of is plausible, since the probability of selecting the best individual is approximately . For constant population size we get a bound . For the preferred non-deletion case with population size , the bound gets much worse: . This possible (but not necessary!) slowdown has similarities to the slowdown problems of proportionate selection in later optimization stages. The species definition in Section 8 has to be relaxed by allowing mutation sequences of individuals with similar fitness. Larger choices of may be favorable if the standard choice causes problems.
10.3 FUDS
Fitness uniform deletion already requires the range of the fitness function to be broken up into a finite number of intervals. While for discrete valued fitness functions the intervals may correspond to the unique values of the fitness function, this is not a requirement. Indeed if the population is small and the fitness function has a large number of possible values then a more coarse discretization is necessary. Continuous valued fitness functions can therefore be treated in exactly the same way and do not cause any special problems. In fact they are slightly simpler in that we are now free to choose the discretization as fine as we like without being limited by the number of possible fitness values. Of course, like in the discrete case, we still must choose a discretization which is appropriate given the size of the population.
11 The EA Test System
To test FUSS and FUDS we have implemented an EA test system in Java. The complete source code, along with the test problems presented in this paper and basic usage instructions, can be downloaded from [Leg04]. The EA model we have chosen for our tests is the so-called “steady-state” model, as opposed to the more usual “generational” model. In a generational EA, each generation an entirely new population is selected based on the old population, which is then simply discarded. Under the steady-state model that we use, each step of the optimization adds and removes just one individual at a time. Specifically, the process is as follows. First, an individual is selected by the selection scheme; with a certain probability another individual is also selected and the crossover operator is applied to produce a new individual. Then, with another probability, a mutation operator is applied to produce the child individual, which is then added to the population. We refer to the probability of crossing as the crossover probability and the probability of mutating following a crossover as the mutation probability. When no crossover takes place, the individual is always mutated, to ensure that we are not simply adding a clone of an existing individual to the population. Finally, an individual must be deleted in order to keep the population size constant. This individual is selected by the deletion scheme. The deletion scheme is important as it has the power to bias the population in a similar way to the selection scheme.
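One cycle of this steady-state process can be sketched as follows. This is a schematic Python rendering of the loop just described, not the authors' Java code; the selection, deletion, mutation, and crossover operators are passed in as functions:

```python
import random

def steady_state_step(population, select, delete, mutate, crossover,
                      p_cross=0.5, p_mut=0.5, rng=random.Random(0)):
    """One steady-state EA cycle: select, maybe crossover, maybe mutate
    (always mutate when there was no crossover, so we never insert a
    clone), add the child, then let the deletion scheme remove one
    individual to keep the population size constant."""
    parent = select(population)
    if rng.random() < p_cross:
        child = crossover(parent, select(population))
        if rng.random() < p_mut:
            child = mutate(child)
    else:
        child = mutate(parent)  # no crossover: always mutate
    population.append(child)
    delete(population)          # deletion scheme keeps the size fixed
    return child
```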
Our task in this paper is to experimentally analyze how FUSS performs relative to other selection schemes and how FUDS performs relative to other deletion schemes. Because any particular run of a steady state EA requires both a selection and a deletion scheme to be used, there are many possible combinations that we could test. We have narrowed this range of possibilities down to just a few that are commonly used.
Among the selection schemes, tournament selection is one of the simplest and most commonly used and we consider it to be roughly representative of other standard selection schemes which favor the fitter individuals in the population; indeed in the case of tournament size 2 it can be shown that tournament selection is equivalent to the linear ranking selection scheme [Hut91, Sec.2.2.4]. With tournament selection we randomly pick a group of individuals and then select the fittest individual from this group. The size of the group is called the tournament size and it is clear that the larger this group is the more likely we are to select a highly fit individual from the population. At some point in the future we may implement other standard selection schemes to broaden our comparison, however we expect the performance of these schemes to be at best comparable to tournament selection when used with a correctly tuned selection intensity.
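Tournament selection as described above is a few lines of code; a minimal sketch:

```python
import random

def tournament_select(population, fitness, size, rng=random.Random(0)):
    """Tournament selection: draw `size` individuals at random (with
    replacement) and return the fittest of the group.  Larger tournament
    sizes mean higher selection intensity; size 2 is equivalent to
    linear ranking selection, as noted in the text."""
    group = [rng.choice(population) for _ in range(size)]
    return max(group, key=fitness)
```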
Among the deletion schemes, one of the most commonly used in steady-state EAs is random deletion. The rationale for this is that it is neutral in the sense that it does not skew the distribution of the population in any way. Thus whether the population tends toward high or low fitness etc. is solely a function of the selection scheme and its settings. Of course random deletion, unlike FUDS, makes no effort to preserve diversity in the population, as all individuals have an equal chance of being removed. In this paper we compare FUDS against random deletion, as this is the standard deletion scheme in situations where it is difficult or impossible to directly measure the similarity of individuals based on their genomes.
When reporting test results we will adopt the following notation: TOUR2 means tournament selection with a tournament size of 2. Similarly for TOUR3, TOUR4 and so on. Under random selection, denoted RAND, all members of the population have an equal probability of being selected. This is sometimes called uniform selection. When a graph shows the performance of tournament selection over a range of tournament sizes we will simply write TOURx. Naturally FUSS indicates the fitness uniform selection scheme. To indicate the deletion scheme used we will add either the suffix R or F to indicate random deletion or FUDS respectively. Thus, TOUR10R is tournament selection with a tournament size of 10 used with random deletion, while FUSSF is FUSS selection used with FUDS deletion.
The important free parameters to set for each test are the population size, and the crossover and mutation probabilities. Good values for the crossover and mutation probabilities depend on the problem and must be manually tuned based on experience as there are few theoretical guidelines on how to do this. For some problems performance can be quite sensitive to these values while for others they are less important. Our default values are 0.5 for both as this has often provided us with reasonable performance in the past.
For each test we ran the system multiple times with the same mutation and crossover probabilities and the same population size. The only difference was which selection and deletion schemes were used by the code. Thus even if our various parameters, mutation operators etc. were not optimal for a given problem, the comparison is still fair. Indeed we often deliberately set the optimization parameters to nonoptimal values in order to compare the robustness of the systems.
As a steady state optimizer operates on just one individual at a time, the number of cycles within a given run can be high, perhaps 100,000 or more. In order to make our results more comparable to a generational optimizer we divide this number by the size of the population to give the approximate number of generations. Unfortunately the theoretical understanding of the relationship between steady state and generational optimizers is not strong. It has been shown that under the assumption of no crossover the effective selection intensity using tournament selection with size 2 is approximately twice as strong under a steady state EA as it is with a generational EA [RPB99b]. As far as we are aware a similar comparison for systems with crossover has not been performed.
Depending on the purpose of a test run, different stopping criteria were applied. For example, in situations where we wanted to graph how rapidly different strategies converged with respect to generations, it made sense to fix the number of generations. In other situations we wanted to stop a run once the optimizer appeared to have become stuck, that is, when the maximum fitness had not improved after some specified number of generations. In any case we explain for each test the stopping criterion that has been used.
In order to generate reliable statistics we ran each test multiple times; typically 30 times but sometimes up to 100 times if the results were noisy. From these runs we then calculated the mean performance as well as the sample standard deviation and from this the standard error in our estimate of the mean. This value was then used to generate the 95% confidence intervals which appear as error bars on the graphs.
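The error-bar computation described above can be sketched in a few lines. This is a minimal version assuming the normal approximation (1.96 standard errors) for the 95% interval; the function name is our own.

```python
import math

def mean_and_ci95(samples):
    """Mean and a 95% confidence interval for the mean, computed from the
    sample standard deviation and the standard error, as used for error bars."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    se = math.sqrt(var) / math.sqrt(n)                     # standard error of the mean
    return mean, (mean - 1.96 * se, mean + 1.96 * se)
```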
12 A Deceptive 2D Problem
The first problem we examine is the simple but highly deceptive 2D optimization problem which was theoretically analyzed in Section 7. As in the theoretical analysis, we set up the mutation operator to randomly replace either the x or the y position of an individual, and the crossover to take the x position from one individual and the y position from another to produce an offspring. The region of the domain on which the function attains its global maximum is very small, while the local maximum at fitness level 3 covers most of the space. Clearly the only way to reach the global maximum is by leaving this local maximum and exploring the space of individuals with the lower fitness values of 1 or 2. Thus, with respect to the mutation and crossover operators we have defined, this is a deceptive optimization problem, as these partitions mislead the EA [FM93].
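The two operators just described can be sketched as follows, assuming an integer grid domain for illustration (the domain representation and function names are ours, not taken from the paper's implementation).

```python
import random

def mutate(ind, domain_size):
    """Replace either the x or the y coordinate with a uniformly random value."""
    x, y = ind
    if random.random() < 0.5:
        return (random.randrange(domain_size), y)
    return (x, random.randrange(domain_size))

def crossover(a, b):
    """Take the x coordinate from one parent and the y coordinate from the other."""
    return (a[0], b[1])
```

Since each mutation resamples a whole coordinate, any individual can reach any other in at most two mutation steps, the property relied on in the test setup below.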
For this test we set the maximum population size to 1,000 and made 20 runs for each value of the problem parameter. With a steady state EA it is usual to start with a full population of random individuals. However, for this particular problem we reduced the initial population size down to just 10 in order to avoid the effect of doing a large random search when creating the initial population and thereby distorting the scaling. Usually this might create difficulties due to the poor genetic diversity in the initial population. However, since any individual can mutate into any other in just two steps, this is not a problem here. Initial tests indicated that reducing the crossover probability from 0.5 to 0.25 improved the performance slightly and so we have used the latter value.
The first set of results, for the selection schemes used with random deletion, appear in the left graph of Figure 6. As expected, higher selection intensity is a significant disadvantage for this problem. Indeed, even with a tournament size of just 3 the number of generations required to find the maximum became infeasible to compute for smaller values of the problem parameter. Our results confirm the theoretical scaling orders for TOUR2R and for FUSSR predicted in Section 7. Be aware that this is a log-log scaled graph, so the different slopes indicate significantly different orders of scaling.
In the second set of tests we switch from random deletion to FUDS. These results appear in the right graph of Figure 6. We see that with FUDS as the deletion scheme the scaling improves dramatically for RAND, TOUR2 and TOUR3. Indeed they are now of the same order as FUSS, as predicted in Section 7. This shows that for very deceptive problems much higher levels of selection intensity can be applied when using FUDS rather than random deletion. The performance of FUSSR is very similar to that of FUSSF. This is not surprising as the population distribution under FUSS already tends to be approximately uniform across fitness levels and thus we expect the effect of FUDS to be quite weak.
Although this problem was artificially constructed, the results clearly demonstrate how FUSS and FUDS can dramatically improve performance in some situations.
13 Traveling Salesman Problem
A well known optimization problem is the so-called Traveling Salesman Problem (TSP). The task is to find the shortest Hamiltonian cycle (path) in a graph of vertices (cities) connected by edges of certain lengths. There exist highly specialized population based optimizers which use advanced mutation and crossover operators and are capable of finding paths less than one percent longer than the optimal path for instances with very many cities [LK73, MO96, JM97, ACR00]. As our goal is only to study the relative performance of selection and deletion schemes, having a highly refined implementation is not important. Thus the mutation and crossover operators we used were quite simple: mutation was achieved by just switching the position of two of the cities in the solution, while for crossover we used the partially mapped crossover technique [GA85]. Fitness was computed by taking the reciprocal of the tour length.
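The simple swap mutation and the reciprocal-length fitness can be sketched directly (we omit partially mapped crossover for brevity; `dist` is assumed to be a symmetric distance matrix, and the function names are ours).

```python
import random

def swap_mutation(tour):
    """Switch the positions of two randomly chosen cities in the tour."""
    i, j = random.sample(range(len(tour)), 2)
    child = list(tour)
    child[i], child[j] = child[j], child[i]
    return child

def tour_fitness(tour, dist):
    """Fitness is the reciprocal of the total cycle length."""
    n = len(tour)
    length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
    return 1.0 / length
```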
For our first set of tests we used randomly generated TSP problems, that is, the distance between any two cities was chosen uniformly from the unit interval [0, 1]. We chose this as it is known to be a particularly deceptive form of the TSP problem, since the usual triangle inequality does not hold in general. For example, the distance between cities a and c might be 0.1, between cities c and b 0.1, and yet the distance between a and b might be 0.9. The problem still has some structure though, as efficient partial solutions tend to be useful building blocks for efficient complete tours.
For this test we used random distance TSP problems with 20 cities and a population size of 1000. We found that changing the crossover and mutation probabilities did not improve performance and so these have been left at their default values of 0.5. Our stopping criterion was simply to let the EA run for 300 generations as this appeared to be adequate for all of the methods to converge and allowed us to easily graph performance versus generations.
The first graph in Figure 7 shows each of the selection schemes used with random deletion. We see that TOUR3R has insufficient selection intensity for adequate convergence while TOUR12R quickly converges to a local optimum and then becomes stuck. TOUR6R has about the correct level of selection intensity for this problem and population size. FUSSR however initially converges as rapidly as TOUR12R but avoids becoming stuck in local optima. This suggests improved population diversity. The performance curve for FUSSR is impressive, especially considering that it is parameterless.
At first it might seem surprising that the maximum fitness with FUSS climbs very quickly for the first 20 generations, especially considering that FUSS makes no attempt to increase the average fitness in the population. However, we can explain this very rapid rise in solution fitness by considering a simple example. Consider a situation where there is a large number of individuals in a small band of fitness levels, say 1,000 with fitness values ranging from 50 to 70. Add to this population one individual with a fitness value of 73. The total fitness range then contains 24 values. Whenever FUSS picks a random point from 72 to 73 inclusive, this single individual with maximal fitness will be selected. That is, the probability that the single fittest individual will be selected is 2/24 ≈ 0.083. In comparison, under TOUR12 the probability that the fittest individual is selected is the same as the probability that it is picked for the sample of 12 elements used for the tournament, which is approximately 12/1000 = 0.012. Thus the probability of the fittest individual in the population being selected is higher under FUSS than under TOUR12, and so the maximum fitness rises quickly to start with.
Previously in [LHK04] we speculated that this may have been responsible for performance problems that we had observed with FUSS in some situations. However, further experimentation has shown that very rapid rises in maximal fitness are quite rare and are also very short-lived when they do occur — too short to cause any significant diversity problems in the population. We now believe that the population distribution is to blame in these situations; something that we will explore in detail in Section 15.
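The FUSS mechanism behind the selection-probability example above can be sketched as follows: draw a target fitness uniformly between the lowest and highest fitness currently in the population, then select the individual nearest to it. This is a simplified sketch (ties broken arbitrarily, continuous rather than discretized fitness), with our own function names.

```python
import random

def fuss_select(population, fitness):
    """Fitness uniform selection (sketch): draw a target fitness uniformly
    over the population's current fitness range, then return the individual
    whose fitness is nearest to that target."""
    fits = [fitness(ind) for ind in population]
    target = random.uniform(min(fits), max(fits))
    nearest = min(range(len(population)), key=lambda i: abs(fits[i] - target))
    return population[nearest]
```

Because the target is uniform over the fitness range rather than over individuals, a lone individual at an otherwise empty fitness level captures a whole slice of that range, which is exactly why the single fittest individual in the example is selected with probability about 2/24 rather than 1/1001.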
The second graph in Figure 7 shows the same set of selection schemes but now using FUDS as the deletion scheme. With FUDS the performance of all of the selection schemes either stayed the same or improved. In the case of TOUR3 the improvement was dramatic and for TOUR12 the improvement was also quite significant. This is interesting because it shows that with fitness uniform deletion, performance can improve when the selection intensity is either too high or too low. That is, when using FUDS the performance of the EA now appears to be more robust with respect to variation in selection intensity.
In the case of TOUR12F this is evidence of improved population diversity as the EA is no longer becoming stuck. However for TOUR3R the selection intensity is quite low and thus we would expect the population diversity to be relatively good. Thus the fact that TOUR3F was so much better than TOUR3R suggests that FUDS can have significant performance benefits that are not related to improved population diversity.
Investigating further it seems that this effect is due to the way that FUDS focuses the deletion on the large mass of individuals which have an average level of fitness while completely leaving the less common fit individuals alone. This helps a system with very weak selection intensity move the mass of the population up through the fitness space. With higher selection intensity this problem tends not to occur as individuals in this central mass are less likely to be selected thus reducing the rate at which new individuals of average fitness are added to the population.
In order to better understand how stable FUDS performance is when used with different selection intensities we ran another set of tests on random TSP problems with 20 cities and graphed how performance varied by tournament size. For these tests we set the EA to stop each run when no improvement had occurred in 40 generations. We also tested on a range of population sizes: 250, 500, 1000 and 5000. The results appear in Figure 8.
In these graphs we can now clearly see how the performance of TOURxR varies significantly with tournament size. Below the optimal tournament size performance worsened quickly while above this value it also worsened, though more slowly. Interestingly, with a population size of 5000 the optimal tournament size was about 6, while with small populations the optimal value fell to just 4. Presumably this was partly because smaller populations have lower diversity and thus cannot withstand as much selection intensity.
In contrast FUSSR and FUSSF appear as horizontal lines as they do not have a tournament size parameter. We see that they have performed as well as the optimal performance of TOURxR without requiring any tuning. Indeed for larger populations FUSSR appears to be even better than the optimally tuned performance of TOURxR. This is a very positive result for the parameterless FUSS.
Comparing FUDS with random deletion we also see impressive results. For every combination of selection scheme, tournament size and population size the result with FUDS was better than the corresponding result with random deletion, and in some cases much better. Furthermore these graphs clearly display the improved robustness of tournament selection with FUDS as TOURxF produced near optimal results for all tournament sizes. Even with an optimally tuned tournament size FUDS increased performance, particularly with the smaller populations. Indeed for each population size tested the worst performance of TOURxF was equal to the best performance of TOURxR.
With FUSS there was also a performance advantage when using FUDS, again more so with the smaller populations. The combination of both FUSS and FUDS was especially effective as can be seen by the consistently superior performance of FUSSF across all of the graphs.
More tests were run exploring performance with up to 100 cities. Although the performance of FUDS remained stronger than random deletion for very low selection intensity, for high selection intensity the two were equal. We believe that the reason for this is the following: When the space of potential solutions is very large finding anything close to a global optimum is practically impossible, indeed it is difficult to even find the top of a reasonable local optimum as the space has so many dimensions. In these situations it is more important to put effort into simply climbing in the space rather than spreading out and trying to thoroughly explore. Thus higher selection intensity can be an advantage for large problem spaces. At any rate, for large problems and with high selection intensity FUDS did not appear to hinder the performance, while with low selection intensity it continued to significantly improve it.
Experiments were also performed using the more efficient “2-Opt” mutation operator. As expected, this increased performance and allowed much higher selection pressure to be used. Of course the problem then no longer had the kind of deceptive structure that heavily punishes high selection pressure that we are looking for. Nevertheless, FUDS continued to significantly boost the performance of tournament selection, in particular when the tournament size was too small.
14 Set Covering Problem
The set covering problem (SCP) is a reasonably well known NP-complete optimization problem with many real world applications. Let A = (a_ij) be an m × n binary valued matrix and let c_j > 0 for j = 1, …, n be the cost of column j. The goal is to find a subset of the columns such that every row is covered and the total cost is minimized. Define x_j = 1 if column j is in our solution and x_j = 0 otherwise. We can then express the cost of this solution as the sum of c_j x_j over j = 1, …, n, subject to the condition that the sum of a_ij x_j over j is at least 1 for each row i = 1, …, m.
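The SCP objective just stated can be sketched as a short evaluation routine; the fitness used in the experiments is then the reciprocal of this cost. This is an illustrative helper (name and infeasibility convention are ours), not the representation used by Beasley's system.

```python
def scp_cost(x, A, c):
    """Cost of a candidate cover x (x[j] = 1 iff column j is chosen).
    Returns None if some row is left uncovered, i.e. the solution is infeasible."""
    m, n = len(A), len(A[0])
    for i in range(m):
        if not any(A[i][j] and x[j] for j in range(n)):
            return None  # row i is not covered
    return sum(c[j] for j in range(n) if x[j])
```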
Our system of representation, mutation operators and crossover follow that used by Beasley [BC96] and we compute the fitness by taking the reciprocal of the cost. The results presented here are based on the “scp42” problem from a standard collection of SCP problems [Bea03]. The results obtained on other problems in this test set were similar. We found that increasing the crossover probability and reducing the mutation probability improved performance, especially when the selection intensity was low. Thus we have tested the system with a crossover probability of 0.8 and a mutation probability of 0.2. We performed each test at least 50 times in order to minimize the error bars. Our stopping criterion was to terminate each run after no improvement in minimal cost had occurred for 40 generations. The results for this test appear in Figure 9.
Similar to the TSP graphs, we again see the importance of correctly tuning the tournament size with TOURxR. We also see the optimal range of performance for TOURxR moving to the right as the population size increases. This is what we would expect due to the greater diversity in larger populations. This kind of variability is one of the reasons why the selection intensity parameter usually has to be determined by experimentation.
Unlike with TSP however, the performance of FUSS was less convincing in these results. With the smaller populations of 250 and 500, FUSSR was only better than TOURxR when the tournament size was very low or very high. With the larger populations of 1,000 and 5,000 the results were much better, with FUSSR performing as well as the optimal performance of TOURxR. FUSSF performed better than FUSSR, in particular with the smaller populations, though this improvement was still insufficient for it to match the optimal performance of TOURxR in these cases. The fact that the performance of FUSS varied by population size suggests that FUSS might be experiencing some kind of population diversity problem. We will look more carefully at diversity issues in the next section.
With FUDS the results were again very impressive. As with the TSP tests, for all combinations of selection scheme, tournament size and population size that we tested, the performance with FUDS was superior to the corresponding performance with random deletion. This was true even when the tournament size was optimal. While the performance of TOURxF did vary significantly with different tournament sizes, the results were more robust than TOURxR, especially with the larger populations. Indeed for the larger two populations we again have a situation where the worst performance of TOURxF is equal to the optimal performance of TOURxR.
15 Maximum CNF3 SAT
Maximum CNF3 SAT is a well known NP-hard optimization problem [CK03] that has been extensively studied. A three literal conjunctive normal form (CNF) logical equation is a boolean equation that consists of a conjunction of clauses, where each clause contains a disjunction of three literals. So for example, (x1 ∨ x2 ∨ ¬x3) ∧ (¬x1 ∨ x4 ∨ x5) is a CNF3 expression. The goal in the maximum CNF3 SAT problem is to find an instantiation of the variables such that the maximum number of clauses evaluate to true. Thus for the above expression, if x1 = true, x2 = false, x3 = true, x4 = false and x5 = false, then just one clause evaluates to true and thus this instantiation gets a score of one. Achieving significant results in this area would be difficult and this is not our aim; we are simply using this problem as a test to compare selection and deletion schemes.
Our test problems have been taken from the SATLIB collection of SAT benchmark tests [HS00]. The first test was performed on the full set of 100 instances of randomly generated CNF3 formulas with 150 variables and 645 clauses, all of which are known to be satisfiable. Based on test results the crossover and mutation probabilities were left at the default values. Our mutation operator simply flips one boolean variable, and the crossover operator forms a new individual by randomly selecting for each variable which parent’s state to take. Fitness was simply taken to be the number of clauses satisfied. Again we tested across a range of tournament sizes and population sizes. The results of these tests appear in Figure 10.
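The fitness function and operators just described can be sketched as follows, using the common convention that a literal is a signed integer (+v for variable v, −v for its negation); the function names and the dict-based assignment are our own illustrative choices.

```python
import random

def satisfied(clauses, assignment):
    """Count clauses with at least one true literal.  A literal is an int:
    +v means variable v, -v means its negation (variables numbered from 1)."""
    return sum(
        any((assignment[abs(l)] if l > 0 else not assignment[abs(l)])
            for l in clause)
        for clause in clauses
    )

def flip_mutation(assignment):
    """Flip one randomly chosen boolean variable."""
    child = dict(assignment)
    v = random.choice(list(child))
    child[v] = not child[v]
    return child

def uniform_crossover(a, b):
    """For each variable, randomly take the state from one of the two parents."""
    return {v: random.choice((a[v], b[v])) for v in a}
```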
We have shown only the population sizes of 500 and 5,000 as the other population sizes tested followed the same pattern. Interestingly, for this problem there was no evidence of better performance with FUDS at higher selection intensities. Nor for that matter was there the decline in performance with TOURxR that we have seen elsewhere. Indeed with random deletion the selection intensity appeared to have no impact on performance at all. While CNF3 SAT is an NP-hard optimization problem, this lack of dependence on our selection intensity parameter suggests that it may not have the deceptive structure that FUSS and FUDS are designed for.
With low selection intensity FUDS caused performance to fall below that of random deletion; something that we have not seen before. Because the advantages of FUDS have been more apparent with low populations in other test problems, we also tested the system with a population size of only 150. Unfortunately no interesting changes in behavior were observed.
While FUDS had minor difficulties, FUSS had serious problems for all the population sizes that we tested. We suspected that the uniform nature of the population distribution that should occur with both FUSS and FUDS might be to blame as we only expect this to be a benefit for very deceptive problems which are sensitive to the tuning of the selection intensity parameter. Thus we ran the EA with a population of 1000 and graphed the population distribution across the number of clauses satisfied at the end of the run. We stopped each run when the EA made no progress in 40 generations. The results of this appear in Figure 11.
The first thing to note is that with TOUR4R the population collapses to a narrow band of fitness levels, as expected. With TOUR4F the distribution is now uniform, though practically none of the population satisfies fewer than 550 clauses. The reason for this is quite simple: While FUDS levels the population distribution out, TOUR4 tends to select the most fit individuals and thus pushes the population to the right from its starting point. In contrast, FUSS pushes the population toward currently unoccupied fitness levels. This results in the population spreading out in both directions and so the number of individuals with extremely poor fitness is much higher.
Given that our goal is to find an instantiation that satisfies all 645 clauses, it is questionable whether having a large percentage of the population unable to satisfy even 600 clauses is of much benefit. While the total population diversity under FUSSF might be very high, perhaps the kind of diversity that matters the most is the diversity among the relatively fit individuals in the population. This should be true for all but the most excessively deceptive problems. By thinly spreading the population across a very wide range of fitness levels we actually end up with very few individuals with the kind of diversity that matters. Of course this depends on the nature of the problem we are trying to solve and the fitness function that we use.
Fortunately with CNF3 SAT we can directly measure population diversity by taking the average Hamming distance between individuals’ genomes. While this means that the value of the fitness based similarity metric is questionable for this problem, as more direct methods like crowding can be applied, it is a useful situation for our analysis as it allows us to directly measure how effective FUSS and FUDS are at preserving population diversity. The hope of course is that any positive benefits that we have seen here will also carry over to problems where directly measuring the diversity is problematic.
For the diversity tests we again used a population size of 1000. For comparison we used FUSS, TOUR3 and TOUR12, both with random deletion and with FUDS. In each run we calculated two different statistics: the average Hamming distance between individuals in the whole population, and the average Hamming distance between individuals whose fitness was no more than 20 below the fittest individual in the population at the time. These two measurements give us the “total population diversity” and “high fitness diversity” graphs in Figure 12.
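The two diversity statistics can be sketched as follows; the genome encoding (equal-length sequences of boolean states) and function names are illustrative assumptions.

```python
from itertools import combinations

def avg_hamming(genomes):
    """Average Hamming distance over all pairs of genomes."""
    pairs = list(combinations(genomes, 2))
    if not pairs:
        return 0.0
    total = sum(sum(x != y for x, y in zip(g, h)) for g, h in pairs)
    return total / len(pairs)

def high_fitness_hamming(population, fitness, margin=20):
    """Diversity among individuals within `margin` of the current best fitness."""
    best = max(fitness(g) for g in population)
    elite = [g for g in population if fitness(g) >= best - margin]
    return avg_hamming(elite)
```

Note that the all-pairs computation is quadratic in the population size, which is acceptable for offline measurement at this scale but would be costly inside the main optimization loop.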
We graphed these measurements against the solution cost of the fittest individual rather than the number of generations. This is only fair because if good solutions are found very quickly then an equally rapid decline in diversity is acceptable and to be expected. Indeed it is trivial to come up with a system which always maintains high population diversity however long it runs, but is unlikely to find any good solutions. The results were averaged over all 100 problems in the test set. Because the best solution found in each run varied, we have only graphed each curve up to the point where fewer than 50% of the runs were able to achieve this level of fitness. Thus the terminal point at the right of each curve is representative of fairly typical runs rather than just a few exceptional ones that perhaps found unusually good solutions by chance.
The top two graphs in Figure 12 show the total population diversity. As expected, the diversity with TOUR3R and TOUR12R declines steadily as finding better solutions becomes increasingly difficult and the population tends to collapse into a narrow band of fitness. As we would expect, the total population diversity with TOUR3R is higher than with TOUR12R. While FUSSR declines initially, it then stabilizes at around 50 before becoming stuck. As the TOUR3R and TOUR12R curves both extend further to the right even though their total population diversity becomes quite low, this shows that diversity problems in the population as a whole are not a significant factor behind the performance problems with FUSSR.
The top right graph shows the same selection schemes, but this time with FUDS. As expected, FUDS has significantly improved the total population diversity with both TOUR3 and TOUR12, while having little impact on FUSS, which already has a relatively flat population distribution. As the maximal solutions found by TOUR3F and TOUR12F were not better than those of TOUR3R and TOUR12R, this indicates that improved total population diversity is not a significant factor in the performance of the EA for this type of optimization problem. That FUDS has lifted the total diversity for TOUR3 and TOUR12 so that they are now above FUSSF is particularly interesting. This suggests that while FUSS has high total population diversity, there appear to be some more subtle effects that are causing the diversity to be lower than it could be. It may be related to the fact that FUSS sometimes heavily selects from small groups within the population during the early stages of the optimization process, as we noted in Section 13. However we are not certain whether this is occurring in this case.
On the lower set of graphs we see the diversity among the fitter individuals in the population; specifically those whose fitness is no more than 20 below the fittest individual in the population at the time. On the first graph on the left we see that TOUR3 has significantly greater diversity than TOUR12 with both deletion schemes. This is expected as TOUR3 tends to search more evolutionary paths while TOUR12 just rushes down a few. Disappointingly FUDS does not appear to have made much difference to the diversity among these highly fit individuals, though the curves do flatten out a little as the diversity drops below 30, so perhaps FUDS is having a slight impact.
For both FUSSR and FUSSF the diversity among the fit individuals was poor, indeed it was even worse than TOUR12 for both deletion schemes. Thus, while the total population diversity with FUSS tends to be high, the diversity among the fittest individuals in the population can be quite poor. Furthermore, the curves for high fitness diversity all end once the diversity drops into the 12 to 17 range. As this pattern was absent from the graphs of total population diversity, this indicates that it is indeed the diversity among the relatively fit individuals in the population that most determines when the EA is going to become stuck.
In summary, these results show that while FUSS has been successful in maximizing total population diversity, for problems such as CNF3 SAT this is not sufficient. It appears to be more important that the EA maximizes the diversity among those individuals which have higher fitness and in this regard FUSS is poor, which leads to poor performance. This is most likely a characteristic of optimization problems which, while still difficult, are not as deceptive as SCP or random TSP.
16 Conclusions and Future Research Directions
We have addressed the problem of balancing the selection intensity in EAs, which determines speed versus quality of a solution. We invented a new fitness uniform selection scheme FUSS. It generates a selection pressure toward sparsely populated fitness levels. This property is unique to FUSS as compared to other selection schemes (STD). It results in the desired high selection pressure toward higher fitness if there are only a few fit individuals. The selection pressure is automatically reduced when the number of fit individuals increases. We motivated FUSS as a scheme which bounds the number of similar individuals in a population. We defined a universal similarity relation solely depending on the fitness, independent of the problem structure, representation and EA details. We showed analytically by way of a simple example that FUSS can be much more effective than STD. A joint pair selection scheme for recombination has been defined. A heuristic worst case analysis of FUSS compared to STD has been given. For this, the fitness tree model has been defined, which is an interesting analytic tool in itself. FUSS solves the problem of population takeover and the resulting loss of genetic diversity of STD, while still generating enough selection pressure. It does not help in getting a more uniform distribution within a fitness level.
We have also invented a related system called FUDS which achieves a similar effect to FUSS except that it works through deletion rather than through selection. This means that FUDS shares many of the important characteristics of FUSS including strong total population diversity and the impossibility of population collapse. We showed analytically that for a simple deceptive optimization problem the performance of STD when used with FUDS scales similarly to FUSS.
A test system has been constructed and used to evaluate the empirical performance of both FUSS and FUDS on a range of optimization problems with different population sizes, mutation probabilities and crossover probabilities. Their performance has been compared to the more standard methods of tournament selection and random deletion. For the artificial deceptive 2D optimization problem and random distance matrix TSP problems both FUSS and FUDS performed extremely well. For the deceptive 2D problem they dramatically improved the scaling exponent in the number of generations needed to find the global optimum. For the TSP problems FUSSR performed as well as optimally tuned TOURxR for all population sizes, and FUDS caused TOURx to perform near optimally for all tournament sizes and population sizes.
With SCP problems with small populations the performance of FUSSR was only better than TOURxR when the tournament size was poorly set. For populations larger than 1,000 however, FUSSR continued to perform as well as the optimal results for TOURxR. FUDS was again consistently superior returning better results than random deletion for every combination of selection scheme, tournament size and population size tested.
For CNF3 SAT problems we ran into difficulties however. While FUDS significantly improved the performance of FUSS, it was inferior to random deletion for low selection intensities. In other cases the performance was comparable. FUSS however had serious performance problems. Further investigations revealed that this appears to be due to the small number of individuals in the population that have relatively high fitness when using FUSS. We measured the diversity in the population and found that while the total population diversity with FUSS was high, the diversity among the fit individuals was relatively poor. This produced a serious diversity problem in the population when combined with the fact that there are relatively few individuals of high fitness when using FUSS.
As the performance of TOURxR was not impacted by high selection intensity on the CNF3 SAT problem this indicates that this problem does not have the kind of deceptive nature that harshly punishes greedy exploration that we were looking for. Perhaps for such problems a less extreme approach is called for. For example, rather than trying to spread the population across all fitness levels uniformly we should instead control the distribution so that it is biased toward high fitness but never collapses totally as it does with TOURxR.
We have experimented with a deletion scheme which deletes the population distribution down to a convex curve peaked at the fittest individual in the population. This is the deletion equivalent of the scale independent selection scheme described in Section 9. Our results thus far indicate that the performance is equal or slightly superior to random deletion in all situations. However the dramatic improvements that FUDS has over random deletion in some cases are now less significant.
Another possibility is to manipulate the fitness function to achieve the same effect. For example, we have found that by taking the fitness to be the reciprocal of the number of unsatisfied clauses in the CNF3 SAT problem, the performance of FUSS improves significantly, indeed it then becomes comparable to TOURx. Perhaps, however, it would be better to avoid such performance tricks and instead focus on extremely deceptive problems where high selection intensity is heavily punished, that is, the kinds of problems that FUSS and FUDS were specifically designed for.
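The reciprocal transformation can be written f(x) = 1/u(x), where u(x) is the number of clauses unsatisfied by assignment x. A minimal sketch, with a DIMACS-style clause representation as an illustrative assumption and the satisfying case u = 0 handled explicitly:

```python
def unsatisfied(clauses, assignment):
    """Count clauses not satisfied by a boolean assignment.

    Each clause is a tuple of non-zero integers: literal v means
    variable v must be true, -v that it must be false (an
    illustrative DIMACS-style encoding).
    """
    def sat(clause):
        return any((lit > 0) == assignment[abs(lit)] for lit in clause)
    return sum(not sat(c) for c in clauses)

def reciprocal_fitness(clauses, assignment):
    """Fitness = 1 / number of unsatisfied clauses; a satisfying
    assignment (u = 0) is given unbounded fitness."""
    u = unsatisfied(clauses, assignment)
    return float('inf') if u == 0 else 1.0 / u
```

The effect is to compress fitness differences among poor assignments and stretch them near the optimum, so FUSS's uniform spread over fitness values concentrates more effort on nearly-satisfying assignments.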
16.1 Acknowledgments
This work was supported by SNF grants 2100-67712.02 and 200020-107616.
References
 [ACR00] D. Applegate, W. Cook, and A. Rohe. Chained Lin-Kernighan for large traveling salesman problems. Technical report, Department of Computational and Applied Mathematics, Rice University, Houston, TX, 2000.
 [Bak85] J. E. Baker. Adaptive selection methods for genetic algorithms. In Proc. 1st International Conference on Genetic Algorithms and their Applications, pages 101–111, Pittsburgh, PA, 1985. Lawrence Erlbaum Associates.
 [BC96] J. Beasley and P. Chu. A genetic algorithm for the set covering problem. European Journal of Operational Research, 94:392–404, 1996.
 [Bea03] J. Beasley. OR-Library. mscmga.ms.ic.ac.uk/jeb/orlib/scpinfo.html, 2003.
 [BHS91] T. Bäck, F. Hoffmeister, and H. P. Schwefel. A survey of evolution strategies. In Proc. 4th International Conference on Genetic Algorithms, pages 2–9, San Diego, CA, July 1991. Morgan Kaufmann.
 [BT95] T. Blickle and L. Thiele. A mathematical analysis of tournament selection. In Proc. Sixth International Conference on Genetic Algorithms (ICGA’95), pages 9–16, San Francisco, California, 1995. Morgan Kaufmann Publishers.
 [BT97] T. Blickle and L. Thiele. A comparison of selection schemes used in evolutionary algorithms. Evolutionary Computation, 4(4):361–394, 1997.
 [Cav70] D. J. Cavicchio. Adaptive search using simulated evolution. PhD thesis, University of Michigan, Ann Arbor, 1970.
 [CJ91] R. J. Collins and D. R. Jefferson. Selection in massively parallel genetic algorithms. In Proc. Fourth International Conference on Genetic Algorithms, San Mateo, CA, 1991. Morgan Kaufmann Publishers.
 [CK03] P. Crescenzi and V. Kann. A compendium of NP optimization problems. www.nada.kth.se/viggo/problemlist/compendium.html, 2003.
 [EHM99] A. E. Eiben, R. Hinterding, and Z. Michalewicz. Parameter control in evolutionary algorithms. IEEE Transactions on Evolutionary Computation, 3(2):124–141, 1999.
 [Esh91] L. J. Eshelman. The CHC adaptive search algorithm: How to have safe search when engaging in nontraditional genetic recombination. In G. J. E. Rawlings, editor, Foundations of genetic algorithms, pages 265–283. Morgan Kaufmann, San Mateo, 1991.
 [FM93] S. Forrest and M. Mitchell. What makes a problem hard for a genetic algorithm? Some anomalous results and their explanation. Machine Learning, 13(2–3):285–319, 1993.
 [GA85] D. Goldberg and R. Lingle. Alleles, loci and the traveling salesman problem. In Proc. International Conference on Genetic Algorithms and their Applications, pages 154–159. Lawrence Erlbaum Associates, 1985.
 [GD91] D. E. Goldberg and K. Deb. A comparative analysis of selection schemes used in genetic algorithms. In G. J. E. Rawlings, editor, Foundations of genetic algorithms, pages 69–93. Morgan Kaufmann, San Mateo, 1991.
 [Gol89] D. E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, Mass., 1989.
 [GR87] D. E. Goldberg and J. Richardson. Genetic algorithms with sharing for multimodal function optimization. In Proc. 2nd International Conference on Genetic Algorithms and their Applications, pages 41–49, Cambridge, MA, July 1987. Lawrence Erlbaum Associates.
 [Her92] M. Herdy. Reproductive isolation as strategy parameter in hierarchically organized evolution strategies. In Parallel problem solving from nature 2, pages 207–217, Amsterdam, 1992. North-Holland.
 [Hol75] John H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI, 1975.
 [HS00] H. H. Hoos and T. Stützle. SATLIB: An Online Resource for Research on SAT. In SAT 2000, pages 283–292. IOS press, 2000.
 [Hut91] M. Hutter. Implementierung eines Klassifizierungs-Systems. Master’s thesis, Theoretische Informatik, TU München, 1991. 72 pages with C listing, in German, http://www.idsia.ch/marcus/ai/pcfs.htm.
 [Hut02] M. Hutter. Fitness uniform selection to preserve genetic diversity. In Proc. 2002 Congress on Evolutionary Computation (CEC-2002), pages 783–788, Washington D.C., USA, May 2002. IEEE.

 [JM97] D. S. Johnson and A. McGeoch. The traveling salesman problem: A case study. In E. H. L. Aarts and J. K. Lenstra, editors, Local Search in Combinatorial Optimization, Discrete Mathematics and Optimization, chapter 8, pages 215–310. Wiley-Interscience, Chichester, England, 1997.
 [Jon75] K. de Jong. An analysis of the behavior of a class of genetic adaptive systems. Dissertation Abstracts International, 36(10), 5140B, 1975.
 [Leg04] S. Legg. Website. www.idsia.ch/shane, 2004.
 [LH05] S. Legg and M. Hutter. Fitness uniform deletion for robust optimization. In Proc. Genetic and Evolutionary Computation Conference (GECCO’05), pages 1271–1278, Washington D.C., 2005. ACM SigEvo.
 [LHK04] S. Legg, M. Hutter, and A. Kumar. Tournament versus fitness uniform selection. In Proc. 2004 Congress on Evolutionary Computation (CEC’04), pages 2144–2151, Portland, OR, 2004. IEEE.
 [LK73] S. Lin and B. W. Kernighan. An effective heuristic for the travelling salesman problem. Operations Research, 21:498–516, 1973.
 [MO96] O. Martin and S. Otto. Combining simulated annealing with local search heuristics. Annals of Operations Research, 63:57–75, 1996.
 [MSV94] Heinz Mühlenbein and Dirk Schlierkamp-Voosen. The science of breeding and its application to the breeder genetic algorithm (BGA). Evolutionary Computation, 1(4):335–360, 1994.
 [MT93] M. de la Maza and B. Tidor. An analysis of selection procedures with particular attention paid to proportional and Boltzmann selection. In Proc. 5th International Conference on Genetic Algorithms, pages 124–131, San Mateo, CA, USA, 1993. Morgan Kaufmann.
 [RPB99a] A. Rogers and A. Prügel-Bennett. Genetic drift in genetic algorithm selection schemes. IEEE Transactions on Evolutionary Computation, 3(4):298–303, 1999.
 [RPB99b] A. Rogers and A. Prügel-Bennett. Modelling the dynamics of a steady-state genetic algorithm. In Wolfgang Banzhaf and Colin Reeves, editors, Foundations of Genetic Algorithms 5, pages 57–68. Morgan Kaufmann, San Francisco, CA, 1999.
 [Rud00] Günter Rudolph. On takeover times in spatially structured populations: array and ring. In K. K. Lai, O. Katai, M. Gen, and B. Lin, editors, Proceedings of the Second Asia-Pacific Conference on Genetic Algorithms and Applications (APGA’00), pages 144–151, Hong Kong, PR China, 2000. Global-Link Publishing Company.
 [SVM94] D. Schlierkamp-Voosen and H. Mühlenbein. Strategy adaptation by competing subpopulations. In Parallel Problem Solving from Nature – PPSN III, pages 199–208, Berlin, 1994. Springer. Lecture Notes in Computer Science 866.
 [Whi89] D. Whitley. The GENITOR algorithm and selection pressure: Why rankbased allocation of reproductive trials is best. In Proc. Third International Conference on Genetic Algorithms (ICGA’89), pages 116–123, San Mateo, California, 1989. Morgan Kaufmann Publishers, Inc.