I Introduction
The knapsack problem is an NP-hard combinatorial optimisation problem [1] that encompasses a variety of knapsack-type problems, such as the 0-1 knapsack problem and the multidimensional knapsack problem. In the last two decades, evolutionary algorithms (EAs), especially genetic algorithms (GAs), have been widely adopted for tackling the knapsack problem [2, 3, 4, 5]. The problem has received particular interest from the evolutionary computation community for two reasons. First, the binary vector representation of candidate solutions is a natural encoding of the 0-1 knapsack problem's search space, and thereby provides an ideal setting for applying genetic algorithms [6]. Second, the multidimensional knapsack problem is a natural multi-objective optimization problem, so it is often taken as a test problem for studying multi-objective evolutionary algorithms (MOEAs) [7, 8, 9, 10, 11].
A number of empirical results in the literature (see, for instance, [7, 8, 9, 10, 11, 12]) assert that EAs can produce "good" solutions to the knapsack problem. A naturally arising question is then how to measure the "goodness" of the solutions that EAs produce. The most popular approach is to compare the quality of the solutions generated by EAs via computer experiments; for example, the solution quality of an EA is measured by the best solution found within 500 generations [6]. Such a comparison may help to rank the performance of different EAs, yet it seldom provides any information about the proximity of the solutions produced by the EAs to the optimum.
From the viewpoint of algorithm analysis, it is important to assess how "good" a solution is in terms of the approximation ratio (see [13]). There are several effective approximation algorithms for the knapsack problem [1]; for example, a fully polynomial-time approximation scheme for the 0-1 knapsack problem is presented in [14]. Nonetheless, very few rigorous investigations address the approximation ratio of EAs on the 0-1 knapsack problem. Kumar and Banerjee [15] recast the 0-1 knapsack problem as a bi-objective knapsack problem with two conflicting objectives (maximizing profits and minimizing weights), introduced a (1+ε)-approximate set for this bi-objective problem, and designed an MOEA, called the Restricted Evolutionary Multi-objective Optimizer, to obtain the (1+ε)-approximate set. A pioneering contribution of [15] is a rigorous runtime analysis of the proposed MOEA.
The current paper investigates the approximation ratio of three types of EAs, combining bitwise mutation, truncation selection and diverse repair mechanisms, for the 0-1 knapsack problem. The first type is pure strategy EAs, which exploit a single repair method. The second type is mixed strategy EAs, which choose a repair method at random from a pool of repair methods. The third type is a multi-objective EA using helper objectives, which is a simplified version of the EA in [16].
The remainder of the paper is organized as follows. The 0-1 knapsack problem is introduced in Section II. In Section III we analyse pure strategy EAs, while in Section IV we analyse mixed strategy EAs. Section V is devoted to analysing an MOEA using helper objectives. Section VI concludes the article.
II Knapsack Problem and Approximate Solutions
The 0-1 knapsack problem is the most important knapsack problem and one of the most intensively studied combinatorial optimisation problems [1]. Given an instance of the 0-1 knapsack problem with a set of weights $w_1, \dots, w_n$, profits $p_1, \dots, p_n$, and the capacity $C$ of a knapsack, the task is to find a binary vector $x = (x_1, \dots, x_n)$ so as to
(1) $\max f(x) = \sum_{i=1}^{n} p_i x_i, \quad \text{subject to} \quad \sum_{i=1}^{n} w_i x_i \le C,$
where $x_i = 1$ if item $i$ is selected in the knapsack and $x_i = 0$ if it is not. A feasible solution is a knapsack represented by a binary vector $x$ which satisfies the constraint; an infeasible solution is an $x$ that violates the constraint. The vector $(0, \dots, 0)$ represents an empty knapsack.
In the last two decades, evolutionary algorithms, especially genetic algorithms (GAs), have been widely adopted for tackling the knapsack problem [2, 3]. To assess the quality of the solutions produced by EAs, we follow classical approximation algorithm theory (see [13] for a detailed exposition) and define an evolutionary approximation algorithm as follows.
Definition 1
We say that an EA is an $\epsilon$-approximation algorithm for an optimization problem if, for all instances of the problem, the EA can produce within a polynomial runtime a solution whose value is within a factor $\epsilon$ of the value of an optimal solution, regardless of the initialization. Here the runtime is measured by the expected number of function evaluations.
For instance, in the case of the 0-1 knapsack problem, an evolutionary 1/2-approximation algorithm can always find, within a polynomial runtime, a solution whose value is at least half of the optimal value.
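On small instances, the notions above can be checked directly by brute force. The sketch below is a minimal illustration in plain Python (all names are ours, not from the paper): it computes the fitness of a candidate vector, the optimal value by exhaustive search, and the resulting approximation ratio.

```python
from itertools import product

def knapsack_value(x, profits, weights, capacity):
    """Fitness of a candidate: total profit if feasible, else 0."""
    if sum(w for w, xi in zip(weights, x) if xi) > capacity:
        return 0
    return sum(p for p, xi in zip(profits, x) if xi)

def optimum(profits, weights, capacity):
    """Optimal value by exhaustive search over all 2^n vectors (small n only)."""
    n = len(profits)
    return max(knapsack_value(x, profits, weights, capacity)
               for x in product((0, 1), repeat=n))

def approximation_ratio(x, profits, weights, capacity):
    """The quality measure f(x)/f(x*) used throughout the paper."""
    return (knapsack_value(x, profits, weights, capacity)
            / optimum(profits, weights, capacity))
```

For example, on three items of profit and weight 10 with capacity 20, the single-item solution (1,0,0) has ratio 1/2 of the optimum 20.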
III Pure Strategy (N+1) Evolutionary Algorithms
In this section we analyze pure strategy EAs for the 0-1 knapsack problem. Here a pure strategy EA refers to an EA that employs a single repair method. The genetic operators used in the EAs are bitwise mutation and truncation selection.

Bitwise Mutation: flip each bit of the parent independently with probability $1/n$.

Truncation Selection: select the best $N$ individuals from the parent population and the child.
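The two operators can be sketched as follows. This is a minimal Python sketch under the usual assumptions (mutation rate $1/n$, a uniformly chosen parent), not the paper's exact algorithm; all function names are ours.

```python
import random

def bitwise_mutation(x, n):
    """Flip each bit of x independently with probability 1/n."""
    return tuple(1 - b if random.random() < 1.0 / n else b for b in x)

def truncation_selection(parents, child, fitness, N):
    """Keep the best N individuals among the N parents and the child."""
    pool = parents + [child]
    pool.sort(key=fitness, reverse=True)
    return pool[:N]

def generation(population, fitness, repair):
    """One generation of the (N+1) EA: mutate a random parent,
    repair the child, then apply truncation selection."""
    N, n = len(population), len(population[0])
    child = repair(bitwise_mutation(random.choice(population), n))
    return truncation_selection(population, child, fitness, N)
```

The `repair` argument is a placeholder for one of the repair methods discussed next.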
A number of diverse methods are available for handling constraints in EAs [6, 17]. Empirical results indicate that repair methods are more efficient than penalty function methods for the knapsack problem [18]; thus only repair methods are investigated in the current paper. The repair procedure [6] repeatedly removes a selected item from an infeasible knapsack until the constraint is satisfied.
There are several selection rules available for the repair procedure, such as the profit-greedy repair, the ratio-greedy repair and the random repair methods.

Profit-greedy repair: sort the items in decreasing order of their profits $p_i$; then select the item with the smallest profit and remove it from the knapsack.

Ratio-greedy repair: sort the items in decreasing order of their profit-to-weight ratios $p_i/w_i$; then select the item with the smallest ratio and remove it from the knapsack.

Random repair: select an item from the knapsack uniformly at random and remove it.
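The repair procedure with the three selection rules above can be sketched as follows (a minimal Python sketch; the function names are ours):

```python
import random

def repair(x, weights, capacity, select):
    """Repeatedly remove the item chosen by `select` until x is feasible."""
    packed = [i for i, b in enumerate(x) if b]
    while sum(weights[i] for i in packed) > capacity:
        packed.remove(select(packed))
    y = [0] * len(x)
    for i in packed:
        y[i] = 1
    return tuple(y)

def profit_greedy(profits):
    """Selection rule: the packed item with the smallest profit."""
    return lambda packed: min(packed, key=lambda i: profits[i])

def ratio_greedy(profits, weights):
    """Selection rule: the packed item with the smallest profit/weight ratio."""
    return lambda packed: min(packed, key=lambda i: profits[i] / weights[i])

def random_repair():
    """Selection rule: a packed item chosen uniformly at random."""
    return lambda packed: random.choice(packed)
```

For instance, with profits (10, 2, 5), weights (5, 10, 10) and capacity 15, both greedy rules first evict item 2 and yield the feasible knapsack (1, 0, 1).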
Thanks to the repair method, all infeasible solutions are repaired into feasible ones. The fitness function of a feasible solution $x$ is $f(x) = \sum_{i=1}^{n} p_i x_i$.
First, let's consider a pure strategy EA using the ratio-greedy repair for solving the 0-1 knapsack problem.
The following proposition reveals that the EA using the ratio-greedy repair cannot produce a good solution to the 0-1 knapsack problem within a polynomial runtime.
Proposition 1
For any constant $\epsilon \in (0,1)$, the EA using ratio-greedy repair is not an $\epsilon$-approximation algorithm for the 0-1 knapsack problem.
Proof:
According to Definition 1, it suffices to consider the following instance of the 0-1 knapsack problem:
Item  

Profit  
Weight  
Capacity 
where without loss of generality, suppose is a large positive integer for a sufficiently large .
The global optimum for the instance described above is
A local optimum is
The ratio of fitness between the local optimum and the global optimum is
Suppose that the EA starts at the above local optimum, which has the second-highest fitness. Truncation selection combined with the ratio-greedy repair prevents a mutant solution from entering the next generation unless the mutant individual is the global optimum itself. Thus, the EA arrives at the global optimum only if the one-valued bits of the local optimum are flipped into zero-valued ones and the distinguished bit is flipped from 0 to 1, while the other zero-valued bits remain unchanged. The probability of this event happening is exponentially small in $n$.
Thus, we deduce that the expected runtime is exponential in $n$. This completes the argument.
Letting the constant $\epsilon$ tend towards 0, Proposition 1 tells us that the solution produced by the EA using the ratio-greedy repair after a polynomial runtime may be arbitrarily bad.
Next, we consider another pure strategy EA, one that uses random repair to tackle the 0-1 knapsack problem.
Similarly, we may prove that this EA cannot produce a good solution to the 0-1 knapsack problem within a polynomial runtime, using the same instance as that in Proposition 1.
Proposition 2
For any constant $\epsilon \in (0,1)$, the EA using random repair is not an $\epsilon$-approximation algorithm for the 0-1 knapsack problem.
Proposition 2 tells us that the solution produced by the EA using random repair may be arbitrarily bad.
Finally, we investigate a pure strategy EA using profit-greedy repair for solving the 0-1 knapsack problem.
Proposition 3
For any constant $\epsilon \in (0,1)$, the EA using profit-greedy repair is not an $\epsilon$-approximation algorithm for the 0-1 knapsack problem.
Proof:
Let’s consider the following instance:
Item  

Profit  
Weight  
Capacity 
where without loss of generality, suppose is a large positive integer for a sufficiently large .
The local optimum is
and the global optimum is
The fitness ratio between the local optimum and the global optimum is
Suppose that the EA starts at the local optimum. Let's investigate the following mutually exclusive and exhaustive events:

An infeasible solution has been generated. In this case the infeasible solution will be repaired back to the local optimum by the profit-greedy repair.

A feasible solution with fitness smaller than that of the local optimum has been generated. In this case, truncation selection will prevent the new feasible solution from being accepted.

A feasible solution is generated with fitness not smaller than that of the local optimum. This is the only way in which truncation selection will preserve the new mutant solution. Nonetheless, this event happens only if the first bit of the individual is flipped from 1 to 0 while a large number of zero-valued bits of this individual are flipped from 0 to 1. The probability of this event is exponentially small.
It follows immediately that if the EA starts at the local optimum, the expected runtime to produce a better solution is exponential in $n$. The desired conclusion now follows immediately from Definition 1.
Proposition 3 tells us that a solution produced by the EA using profit-greedy repair may be arbitrarily bad as well.
In summary, we have demonstrated that none of the three pure strategy EAs is an $\epsilon$-approximation algorithm for the 0-1 knapsack problem for any constant $\epsilon \in (0,1)$.
IV Mixed Strategy (N+1) Evolutionary Algorithm
In this section we analyse mixed strategy evolutionary algorithms, which combine several repair methods. Here a mixed strategy EA refers to an EA employing two or more repair methods, selected according to a probability distribution over the set of repair methods. It is worth noting that other types of mixed strategy EAs have been considered in the literature; for example, the mixed strategy EA in [19] employs four mutation operators. Naturally, we want to know whether a mixed strategy EA, combining two or more repair methods, may produce a solution with a guaranteed approximation ratio for the 0-1 knapsack problem. A mixed strategy EA for solving the 0-1 knapsack problem is described as follows: it combines the ratio-greedy and profit-greedy repair methods.
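The mixed strategy repair step can be sketched as follows, assuming repair methods of the kind described in Section III. This is a minimal Python sketch (the names are ours); a uniform choice is the default, but any distribution over the pool may be supplied.

```python
import random

def mixed_repair(x, repair_methods, distribution=None):
    """Pick one repair method according to the given probability
    distribution (uniform if None) and apply it to the solution x."""
    method = random.choices(repair_methods, weights=distribution, k=1)[0]
    return method(x)
```

For instance, `mixed_repair(x, [ratio_greedy_repair, profit_greedy_repair])` would apply one of the two repairs with probability 1/2 each.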
Unfortunately, the quality of the solutions produced by the mixed strategy EA still has no guarantee.
Proposition 4
Given any constant $\epsilon \in (0,1)$, the mixed strategy EA using ratio-greedy repair and profit-greedy repair is not an $\epsilon$-approximation algorithm for the 0-1 knapsack problem.
Proof:
Consider the same instance as that in the proof of Proposition 3:
Item  

Profit  
Weight  
Capacity 
where the local optimum is and . The global optimum is and .
The fitness ratio between the local optimum and the global optimum is
Suppose the EA starts at the local optimum. Let's analyse the following mutually exclusive and exhaustive events that can occur upon completion of mutation:

A feasible solution is generated whose fitness is smaller than that of the local optimum. In this case, truncation selection will prevent the new feasible solution from entering the next generation.

A feasible solution is generated whose fitness is not smaller than that of the local optimum. Truncation selection may allow the new feasible solution to enter the next generation. This event happens only if the first bit is flipped from 1 to 0 and a large number of zero-valued bits are flipped into one-valued ones. The probability of this event is exponentially small.

An infeasible solution is generated in which only a small number of zero-valued bits have been flipped into one-valued bits. In this case, either the infeasible solution is repaired back to the local optimum by the profit-greedy repair, or it is repaired by the ratio-greedy repair into a feasible solution in which the first bit is 0 and only a few of the remaining bits are one-valued. In the latter case the fitness of the new feasible solution is smaller than that of the local optimum and therefore cannot be accepted by truncation selection.

An infeasible solution is generated in which a large number of zero-valued bits have been flipped into one-valued bits. The probability of this event is exponentially small.
Afterwards, with positive probability, the infeasible solution is repaired by the ratio-greedy repair into a feasible one in which the first bit is 0 and only a few of the remaining bits are one-valued. In the latter case the fitness of the new feasible solution is smaller than that of the local optimum, so truncation selection prevents it from entering the next generation.
Summarizing the four cases described above, we see that when the EA starts at the local optimum, it can only generate a better solution with an exponentially small probability.
We then know that the expected runtime to produce a better solution is exponential in $n$. The conclusion of Proposition 4 now follows at once.
Proposition 4 tells us that solutions produced by the mixed strategy (N+1) EA exploiting the ratio-greedy repair and profit-greedy repair may be arbitrarily bad.
Furthermore, we can prove that even the mixed strategy EA combining the ratio-greedy repair, profit-greedy repair and random repair together is not an $\epsilon$-approximation algorithm for the 0-1 knapsack problem. The proof is practically identical to that of Proposition 4.
In summary, we have demonstrated that the mixed strategy EAs are not $\epsilon$-approximation algorithms for the 0-1 knapsack problem for any constant $\epsilon \in (0,1)$.
V Multi-Objective Evolutionary Algorithm
So far, we have established several negative results about EAs for the 0-1 knapsack problem. A naturally arising question is then how to construct an evolutionary approximation algorithm. The most straightforward approach is first to apply an approximation algorithm to produce a good solution and afterwards to run an EA to seek the global optimum. Nonetheless, such an EA sometimes gets trapped in the absorbing area of a local optimum, and so is less efficient at seeking the global optimum.
Here we analyse a multi-objective EA using helper objectives (denoted MOEA for short), which is similar to the EA presented in [16], except for small changes made to the helper objectives for the sake of analysis. Experimental results in [16] have shown that the MOEA using helper objectives performs better than a simple combination of an approximation algorithm and a GA.
The MOEA is designed using the multi-objectivization technique. In multi-objectivization, single-objective optimisation problems are transformed into multi-objective optimisation problems either by decomposing the original objective into several components [20] or by adding helper objectives [21]. Multi-objectivization may bring both positive and negative effects [22, 23, 24]. This approach has been used for solving several combinatorial optimisation problems, for example, the knapsack problem [15], the vertex cover problem [25] and the minimum label spanning tree problem [26].
Now we describe the MOEA using helper objectives, similar to the EA in [16]. The original single-objective optimization problem (1) is recast as a multi-objective optimization problem using three helper objectives. First, let's look at the following instance.
Item      1   2   3   4   5
Profit    10  10  10  12  12
Weight    10  10  10  10  10
Capacity  20
The global optimum of this instance is $x^* = (0,0,0,1,1)$. In the optimal solution, the average profit of the packed items is the largest. Thus the first helper objective is to maximize the average profit of the items in a knapsack. We do not use the original profit values; instead we use their ranking values. Assume that the profit of item $i$ is the $k$-th smallest; then its ranking value is $r_i = k$. For example, in the above instance, items 4 and 5 have the two largest ranking values. The helper objective function is defined to be
(2) $h_1(x) = \frac{\sum_{i=1}^{n} r_i x_i}{|x|},$
where $|x| = \sum_{i=1}^{n} x_i$ denotes the number of items in the knapsack (with $h_1(x) = 0$ when $|x| = 0$).
Next we consider another instance.
Item      1   2   3   4   5
Profit    15  15  20  20  20
Weight    10  10  20  20  20
Capacity  20
The global optimum of this instance is $x^* = (1,1,0,0,0)$. In the optimal solution, the average profit-to-weight ratio of the packed items is the largest, yet their average profit is not. The second helper objective is therefore to maximize the average profit-to-weight ratio of the items in a knapsack. Again, we use ranking values instead of the original ratios. Assume that the profit-to-weight ratio of item $i$ is the $k$-th smallest; then its ranking value is $s_i = k$. For example, in the above instance, items 1 and 2 have the two largest ranking values. The helper objective function is defined to be
(3) $h_2(x) = \frac{\sum_{i=1}^{n} s_i x_i}{|x|}.$
Finally, let's consider the following instance.
Item      1   2   3   4   5
Profit    40  40  40  40  150
Weight    30  30  30  30  100
Capacity  120
The global optimum of this instance is $x^* = (1,1,1,1,0)$. In the optimal solution, neither the average profit of the packed items nor their average profit-to-weight ratio is the largest; instead, the number of packed items is the largest (equivalently, the average weight is the smallest). Thus the third helper objective is to maximize the number of items in a knapsack. The objective function is
(4) $h_3(x) = |x| = \sum_{i=1}^{n} x_i.$
We then consider the multi-objective optimization problem:
(5) $\max \big( f(x), h_1(x), h_2(x), h_3(x) \big),$
that is, the original objective (1) together with the three helper objectives (2)-(4).
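Assuming the ranking-based definitions above (with ties among equal values broken by item index, a detail the text leaves open), the three helper objectives can be sketched as follows; the function names are ours.

```python
def ranks(values):
    """r[i] = k if values[i] is the k-th smallest (ties broken by index)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for k, i in enumerate(order, start=1):
        r[i] = k
    return r

def helper_objectives(x, profits, weights):
    """Return (h1, h2, h3): average profit rank, average ratio rank,
    and number of packed items, for the knapsack encoded by x."""
    packed = [i for i, b in enumerate(x) if b]
    rp = ranks(profits)                                    # profit ranking values
    rs = ranks([p / w for p, w in zip(profits, weights)])  # ratio ranking values
    m = len(packed)
    h1 = sum(rp[i] for i in packed) / m if m else 0.0
    h2 = sum(rs[i] for i in packed) / m if m else 0.0
    h3 = m
    return h1, h2, h3
```

On the first instance above (profits 10, 10, 10, 12, 12), the optimum (0,0,0,1,1) attains the largest average profit rank, as the text claims.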
The multi-objective optimisation problem (5) is solved by an EA using bitwise mutation and multi-criteria truncation selection, plus a mixed strategy of two repair methods.
A novel multi-criteria truncation selection operator is adopted in the above EA. Since the target is to maximise several objectives simultaneously, we select individuals that have higher function values with respect to each objective function. The pseudocode of the multi-criteria selection is described as follows.
In the above algorithm, Steps 3-4 select the individuals with higher values of the first helper objective (2). In order to preserve diversity, we choose individuals that take different values of the original objective or of the third helper objective (4). Similarly, Steps 9-10 select the individuals with higher values of the second helper objective (3); to maintain diversity, we choose individuals that take different values of (4). Steps 15-16 select individuals with higher values of the third helper objective (4); again, for diversity, we choose individuals that take different values of (3). We do not explicitly select individuals based on the original objective; instead, this is done implicitly during Steps 9-10 and Steps 15-16.
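Since the selection pseudocode referenced here did not survive into this copy, the following is only a plausible sketch of the idea in Python (all names are ours): for each objective in turn, keep top-ranked individuals while admitting at most one individual per value of a chosen diversity key.

```python
def multicriteria_truncation(pool, objectives, diversity_keys, N):
    """For each objective, scan the pool from best to worst and keep an
    individual only if its diversity key has not been seen for that
    objective yet; stop once N individuals have been selected."""
    selected = []
    for objective, key in zip(objectives, diversity_keys):
        seen = set()
        for x in sorted(pool, key=objective, reverse=True):
            if key(x) not in seen and x not in selected:
                seen.add(key(x))
                selected.append(x)
                if len(selected) == N:
                    return selected
    return selected
```

In the MOEA, `objectives` would hold the helper objectives and `diversity_keys` the complementary objectives used to keep distinct values in the population.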
Using helper objectives and multi-criteria truncation selection brings the benefit of searching along several directions simultaneously. Hence the MOEA may arrive at a local optimum quickly, while at the same time not getting trapped in the absorbing area of a local optimum of the original objective. The experimental results in [16] demonstrate that the MOEA using helper objectives outperforms a simple combination of an approximation algorithm and a GA.
The analysis is based on a fact derived from the analysis of the greedy algorithm for the 0-1 knapsack problem (see [1, Section 2.4]). Consider the following algorithm: pack the items greedily in decreasing order of their profit-to-weight ratios, skipping any item that does not fit, and denote the resulting solution by $x'$; let $x''$ be the solution consisting solely of the most profitable item.
Then the fitness of $x'$ or $x''$ is not smaller than 1/2 of the fitness of the optimal solution.
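This classical greedy fact can be sketched as follows (a minimal Python sketch; the function name is ours): pack by decreasing profit-to-weight ratio, then compare the result with the most profitable single item, and return the better of the two.

```python
def greedy_half_approximation(profits, weights, capacity):
    """Greedy packing by decreasing profit-to-weight ratio, compared
    against the single most profitable item that fits on its own;
    the better of the two is at least half as good as the optimum."""
    n = len(profits)
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
    greedy, load = [0] * n, 0
    for i in order:
        if load + weights[i] <= capacity:
            greedy[i] = 1
            load += weights[i]
    greedy_value = sum(profits[i] for i in range(n) if greedy[i])
    # the most profitable single item that fits by itself
    fits = [i for i in range(n) if weights[i] <= capacity]
    best = max(fits, key=lambda i: profits[i]) if fits else None
    if best is not None and profits[best] > greedy_value:
        single = [0] * n
        single[best] = 1
        return tuple(single)
    return tuple(greedy)
```

On the instance with profits (2, 100), weights (1, 100) and capacity 100, pure ratio-greedy packing yields only profit 2, but the comparison with the single most profitable item recovers the optimum.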
Based on the above fact, we can prove the following result.
Theorem 1
If the population size is sufficiently large, then the MOEA can produce, within a polynomial expected runtime, a feasible solution that is not worse than both of the two greedy solutions above.
Proof:
(1) Without loss of generality, let the first item be the most profitable one. First, it suffices to prove that the EA can generate a feasible solution fitting the Holland schema $(1 * \cdots *)$ (as usual, $*$ stands for the 'don't care' symbol that may be replaced either by a 0 or a 1) within a polynomial runtime.
Suppose that the values of the first helper objective of all the individuals in the population are smaller than that of such a solution, that is, they all fit the Holland schema $(0 * \cdots *)$. Let $x$ be the individual chosen for mutation. Through mutation, the first bit of $x$ can be flipped from 0 to 1 with probability $1/n$. If the child is feasible, then we arrive at the desired individual. If the child is infeasible, then, with positive probability, the first item will be kept thanks to the profit-greedy repair, and a feasible solution containing it is generated. We have now shown that in each generation the EA generates a feasible solution that includes the most profitable item with positive probability.
Thus, the EA can generate a feasible solution fitting the Holland schema $(1 * \cdots *)$ within a polynomial expected runtime.
(2) Without loss of generality, assume that the items are sorted in decreasing order of their profit-to-weight ratios, and denote by $x'$ the solution produced by the greedy algorithm. We now demonstrate that the EA can reach $x'$ within a polynomial runtime via the helper objectives (3) and (4).
First we prove that the EA can reach the required intermediate individual within a polynomial runtime. We exploit drift analysis [27] as a tool to establish the result. For a binary vector $x$, define the distance function
(6) 
For a population, its distance is defined as the minimum distance among its individuals.
According to the definition of the distance function, it is upper-bounded by a quantity polynomial in $n$.
Suppose that none of the individuals in the current population is the target individual. Let $x$ be the individual whose distance value is the smallest in the current population. The individual belongs to one of the two cases below:

Case 1: $x$ fits the Holland schema in which at least one $*$-bit takes the value 1.

Case 2: $x$ fits the Holland schema in which no $*$-bit takes the value 1.

The individual $x$ will be chosen for mutation with positive probability. Now we analyse the mutation events related to the above two cases.
Analysis of Case 1: one of the 1-valued $*$-bits (but not the first bit) is flipped to 0, and the other bits remain unchanged. This event will happen with probability
(7) $\frac{1}{n}\left(1 - \frac{1}{n}\right)^{n-1} \ge \frac{1}{en}.$
Let's establish how the value of the second helper objective increases during the mutation. Denote the 1-valued bits in $x$ by their indices; the objective's value is the average of their ranking values.
Without loss of generality, suppose the flipped bit is the one with the smallest ranking value; after mutation, the objective's value is the average of the remaining ranking values.
Thus, the value of the objective increases (or equivalently, the value of the distance function decreases) by
(8) 
Thanks to the multi-criteria truncation selection, the value of the objective never decreases, so there is no negative drift. Therefore the drift in Case 1 is
(9) 
Analysis of Case 2: the first bit is flipped to 0, and the other bits remain unchanged. The analysis is identical to Case 1, and the drift in Case 2 is the same as that in Case 1.
Recall that the distance function is upper-bounded by a quantity polynomial in $n$. Applying the drift theorem [27, Theorem 1], we deduce that the expected runtime to reach the target individual is polynomial in $n$. Once this individual is included in the population, it will be kept forever according to the multi-criteria truncation selection.
Next we prove that, starting from this individual, the EA can reach the greedy solution within a polynomial runtime. Suppose that the current population includes the individual but not its successor on the path. The individual may be chosen for mutation with positive probability, and can then be mutated into its successor with probability at least $1/(en)$ (a single specified bit flip). The successor has the second-largest value of the second helper objective; thus, according to the multi-criteria truncation selection, it will be kept in the next generation. Hence the expected runtime for the EA to reach the successor is polynomial. Repeating this argument along the path, the expected runtime for the EA to reach the greedy solution is polynomial in $n$.
Combining the above discussions, we see that the expected runtime to produce a solution not worse than the two greedy solutions is polynomial in $n$.
If we change the helper objective functions (2) and (3) to those used in [16],
(10)  
(11) 
then the above proof no longer works, and a new proof is needed to obtain the same conclusion. Furthermore, it should be mentioned that none of the three helper objectives can be removed; otherwise the MOEA will not produce a solution with a guaranteed approximation ratio. On the other hand, the performance might be even better if more objectives are added, for example,
(12) 
VI Conclusions
In this work, we have assessed the solution quality of three types of EAs, which exploit bitwise mutation and truncation selection, for solving the 0-1 knapsack problem. We have proven that the pure strategy EAs using a single repair method and the mixed strategy EAs combining two repair methods are not $\epsilon$-approximation algorithms for any constant $\epsilon \in (0,1)$; in other words, the solution quality of these EAs may be arbitrarily bad. Nevertheless, we have shown that a multi-objective EA using helper objectives is a 1/2-approximation algorithm with a polynomial expected runtime. Our work demonstrates that using helper objectives is a good approach to designing evolutionary approximation algorithms. The advantages of the EA using helper objectives are that it searches along several directions and preserves population diversity.
Population-based EAs using other diversity-preserving strategies, such as niching methods, are not investigated in this paper; extending this work to such EAs is a direction for future research. Another direction is to study the solution quality of MOEAs for the multidimensional knapsack problem.
Acknowledgements
This work was supported by the EPSRC under Grant No. EP/I009809/1 and by the NSFC under Grant No. 61170081.
References
 [1] S. Martello and P. Toth, Knapsack Problems. Chichester: John Wiley & Sons, 1990.
 [2] Z. Michalewicz and J. Arabas, “Genetic algorithms for the 0/1 knapsack problem,” in Methodologies for Intelligent Systems. Springer, 1994, pp. 134–143.
 [3] S. Khuri, T. Bäck, and J. Heitkötter, “The zero/one multiple knapsack problem and genetic algorithms,” in Proceedings of the 1994 ACM Symposium on Applied Computing. ACM, 1994, pp. 188–193.

 [4] P. C. Chu and J. E. Beasley, “A genetic algorithm for the multidimensional knapsack problem,” Journal of Heuristics, vol. 4, no. 1, pp. 63–86, 1998.  [5] G. R. Raidl, “An improved genetic algorithm for the multiconstrained 0-1 knapsack problem,” in Proceedings of the 1998 IEEE World Congress on Computational Intelligence. IEEE, 1998, pp. 207–211.
 [6] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, 3rd ed. New York: Springer Verlag, 1996.
 [7] E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: A comparative case study and the strength pareto approach,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
 [8] A. Jaszkiewicz, “On the performance of multiple-objective genetic local search on the 0/1 knapsack problem: A comparative experiment,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 4, pp. 402–412, 2002.
 [9] M. Eugénia Captivo, J. Clìmaco, J. Figueira, E. Martins, and J. Luis Santos, “Solving bicriteria 0–1 knapsack problems using a labeling algorithm,” Computers & Operations Research, vol. 30, no. 12, pp. 1865–1886, 2003.
 [10] E. Özcan and C. Başaran, “A case study of memetic algorithms for constraint optimization,” Soft Computing, vol. 13, no. 8, pp. 871–882, 2009.
 [11] R. Kumar and P. K. Singh, “Assessing solution quality of biobjective 01 knapsack problem using evolutionary and heuristic algorithms,” Applied Soft Computing, vol. 10, no. 3, pp. 711–718, 2010.
 [12] P. Rohlfshagen and J. A. Bullinaria, “Nature inspired genetic algorithms for hard packing problems,” Annals of Operations Research, vol. 179, no. 1, pp. 393–419, 2010.
 [13] D. P. Williamson and D. B. Shmoys, The Design of Approximation Algorithms. Cambridge University Press, 2011.
 [14] O. H. Ibarra and C. E. Kim, “Fast approximation algorithms for the knapsack and sum of subset problems,” Journal of the ACM, vol. 22, no. 4, pp. 463–468, 1975.
 [15] R. Kumar and N. Banerjee, “Analysis of a multiobjective evolutionary algorithm on the 0–1 knapsack problem,” Theoretical Computer Science, vol. 358, no. 1, pp. 104–120, 2006.
 [16] J. He, F. He, and H. Dong, “A novel genetic algorithm using helper objectives for the 01 knapsack problem,” arXiv, vol. 1404.0868, 2014.
 [17] C. Coello and A. Carlos, “Theoretical and numerical constrainthandling techniques used with evolutionary algorithms: A survey of the state of the art,” Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 1112, pp. 1245–1287, 2002.
 [18] J. He and Y. Zhou, “A comparison of GAs using penalizing infeasible solutions and repairing infeasible solutions II,” in Proceedings of the 2nd International Symposium on Intelligence Computation and Applications. Wuhan, China: Springer, 2007, pp. 102–110.
 [19] J. He, W. Hou, H. Dong, and F. He, “Mixed strategy may outperform pure strategy: An initial study,” in Proceedings of 2013 IEEE Congress on Evolutionary Computation. IEEE, 2013, pp. 562–569.
 [20] J. D. Knowles, R. A. Watson, and D. W. Corne, “Reducing local optima in singleobjective problems by multiobjectivization,” in Evolutionary MultiCriterion Optimization. Springer, 2001, pp. 269–283.
 [21] M. T. Jensen, “Helperobjectives: Using multiobjective evolutionary algorithms for singleobjective optimisation,” Journal of Mathematical Modelling and Algorithms, vol. 3, no. 4, pp. 323–347, 2005.
 [22] J. Handl, S. C. Lovell, and J. Knowles, “Multiobjectivization by decomposition of scalar cost functions,” in Parallel Problem Solving from Nature–PPSN X. Springer, 2008, pp. 31–40.
 [23] D. Brockhoff, T. Friedrich, N. Hebbinghaus, C. Klein, F. Neumann, and E. Zitzler, “On the effects of adding objectives to plateau functions,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 591–603, 2009.
 [24] D. F. Lochtefeld and F. W. Ciarallo, “Helperobjective optimization strategies for the jobshop scheduling problem,” Applied Soft Computing, vol. 11, no. 6, pp. 4161–4174, 2011.
 [25] T. Friedrich, J. He, N. Hebbinghaus, F. Neumann, and C. Witt, “Approximating covering problems by randomized search heuristics using multiobjective models,” Evolutionary Computation, vol. 18, no. 4, pp. 617–633, 2010.
 [26] X. Lai, Y. Zhou, J. He, and J. Zhang, “Performance analysis of evolutionary algorithms for the minimum label spanning tree problem,” IEEE Transactions on Evolutionary Computation, 2014, (accepted, online).
 [27] J. He and X. Yao, “Drift analysis and average time complexity of evolutionary algorithms,” Artificial Intelligence, vol. 127, no. 1, pp. 57–85, 2001.