A Theoretical Assessment of Solution Quality in Evolutionary Algorithms for the Knapsack Problem

04/14/2014
by Jun He, et al.

Evolutionary algorithms are well suited for solving the knapsack problem. Some empirical studies claim that evolutionary algorithms can produce good solutions to the 0-1 knapsack problem. Nonetheless, few rigorous investigations address the quality of the solutions that evolutionary algorithms may produce for the knapsack problem. The current paper focuses on a theoretical investigation of three types of (N+1) evolutionary algorithms that exploit bitwise mutation, truncation selection, plus different repair methods for the 0-1 knapsack problem. It assesses the solution quality in terms of the approximation ratio. Our work indicates that the solutions produced by pure strategy and mixed strategy evolutionary algorithms may be arbitrarily bad. Nevertheless, the evolutionary algorithm using helper objectives may produce 1/2-approximation solutions to the 0-1 knapsack problem.



I Introduction

The knapsack problem is an NP-hard combinatorial optimisation problem [1], which includes a variety of knapsack-type problems such as the 0-1 knapsack problem and the multi-dimensional knapsack problem. In the last two decades, evolutionary algorithms (EAs), especially genetic algorithms (GAs), have been widely adopted for tackling the knapsack problem [2, 3, 4, 5]. The problem has received particular interest from the evolutionary computation community for two reasons. The first is that the binary vector representation of candidate solutions is a natural encoding of the 0-1 knapsack problem's search space, and thereby provides an ideal setting for the application of genetic algorithms [6]. The second is that the multi-dimensional knapsack problem is a natural multi-objective optimization problem, so it is often taken as a test problem for studying multi-objective evolutionary algorithms (MOEAs) [7, 8, 9, 10, 11].

A number of empirical results in the literature (see, for instance, [7, 8, 9, 10, 11, 12]) assert that EAs can produce "good" solutions to the knapsack problem. A naturally arising question is how to measure the "goodness" of the solutions that EAs may produce. The most popular approach to this question is to compare the quality of the solutions generated by different EAs via computer experiments; for example, the solution quality of an EA may be measured by the best solution found within 500 generations [6]. Such a comparison helps to rank the performance of different EAs, yet it seldom provides any information on how close the solutions produced by the EAs are to the optimum.

From the viewpoint of algorithm analysis, it is important to assess how "good" a solution is in terms of the notion of approximation ratio (see [13]). There are several effective approximation algorithms for the knapsack problem [1]; for example, a fully polynomial time approximation scheme for the 0-1 knapsack problem was presented in [14]. Nonetheless, very few rigorous investigations address the approximation ratio of EAs on the 0-1 knapsack problem. [15] recast the 0-1 knapsack problem as a bi-objective knapsack problem with two conflicting objectives (maximizing profits and minimizing weights), introduced a $(1+\epsilon)$-approximate set of the knapsack problem for the bi-objective formulation, and designed an MOEA, called the Restricted Evolutionary Multiobjective Optimizer, to obtain the $(1+\epsilon)$-approximate set. A pioneering contribution of [15] is a rigorous runtime analysis of the proposed MOEA.

The current paper investigates the approximation ratio of three types of EAs combining bitwise mutation, truncation selection and diverse repair methods for the 0-1 knapsack problem. The first type comprises pure strategy EAs, which exploit a single repair method. The second type comprises mixed strategy EAs, which choose a repair method at random from a pool of repair methods. The third type is a multi-objective EA using helper objectives, which is a simplified version of the EA in [16].

The remainder of the paper is organized as follows. The 0-1 knapsack problem is introduced in Section II. In Section III we analyse pure strategy EAs, while in Section IV we analyse mixed strategy EAs. Section V is devoted to analysing an MOEA using helper objectives. Section VI concludes the article.

II Knapsack Problem and Approximation Solution

The 0-1 knapsack problem is the most important knapsack problem and one of the most intensively studied combinatorial optimisation problems [1]. Given an instance of the 0-1 knapsack problem with a set of weights $w_1, \dots, w_n$, profits $p_1, \dots, p_n$, and the capacity $C$ of a knapsack, the task is to find a binary vector $x = (x_1, \dots, x_n)$ so as to

$$\max \; f(x) = \sum_{i=1}^{n} p_i x_i \quad \text{subject to} \quad \sum_{i=1}^{n} w_i x_i \le C, \tag{1}$$

where $x_i = 1$ if the item $i$ is selected in the knapsack and $x_i = 0$ if the item $i$ is not selected. A feasible solution is a knapsack represented by a binary vector $x$ which satisfies the constraint; an infeasible one is an $x$ that violates the constraint. The vector $(0, \dots, 0)$ represents a null knapsack.
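To make the formulation concrete, the following is a minimal Python sketch of an instance together with the feasibility test and the objective of (1); the class and function names are illustrative, not from the paper.

from dataclasses import dataclass
from typing import List

@dataclass
class Knapsack:
    profits: List[float]    # p_1, ..., p_n
    weights: List[float]    # w_1, ..., w_n
    capacity: float         # C

    def is_feasible(self, x: List[int]) -> bool:
        # a binary vector x is feasible iff its total weight does not exceed C
        return sum(w * b for w, b in zip(self.weights, x)) <= self.capacity

    def fitness(self, x: List[int]) -> float:
        # the objective of equation (1): total profit of the packed items
        return sum(p * b for p, b in zip(self.profits, x))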

In the last two decades, evolutionary algorithms, especially genetic algorithms (GAs), have been widely adopted for tackling the knapsack problem [2, 3]. In order to assess the quality of the solutions produced by EAs, we follow the classical notion of an $\alpha$-approximation algorithm (see [13] for a detailed exposition) and define an evolutionary approximation algorithm as follows.

Definition 1

We say that an EA is an $\alpha$-approximation algorithm for an optimization problem if, for all instances of the problem, the EA can produce within a polynomial runtime a solution the value of which is within a factor of $\alpha$ of the value of an optimal solution, regardless of the initialization. Here the runtime is measured by the expected number of function evaluations.

For instance, in the case of the 0-1 knapsack problem, an evolutionary 1/2-approximation algorithm can always find, within a polynomial runtime, a solution the value of which is at least half of the optimal value.

III Pure Strategy (N+1) Evolutionary Algorithms

In this section we analyse pure strategy EAs for the 0-1 knapsack problem. Here a pure strategy EA refers to an EA that employs a single repair method. The genetic operators used in the EAs are bitwise mutation and truncation selection.

  • Bitwise Mutation: flip each bit independently with probability $1/n$, where $n$ is the number of items.

  • Truncation Selection: select the best $N$ individuals from the parent population and the child.
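The two operators admit a direct implementation; below is a minimal Python sketch under the assumption of the standard mutation rate $1/n$ (function names are illustrative).

import random

def bitwise_mutation(x: list) -> list:
    # flip each bit independently with probability 1/n
    n = len(x)
    return [1 - b if random.random() < 1.0 / n else b for b in x]

def truncation_selection(parents: list, child: list, fitness, N: int) -> list:
    # keep the N fittest individuals among the parent population and the child
    return sorted(parents + [child], key=fitness, reverse=True)[:N]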

A number of diverse methods are available for handling constraints in EAs [6, 17]. Empirical results indicate that repair methods are more efficient than penalty function methods on the knapsack problem [18]; thus, only repair methods are investigated in the current paper. The repair procedure [6] proceeds as follows.

1:  input $x$;
2:  if $\sum_{i=1}^{n} w_i x_i > C$ then
3:      $x$ is infeasible;
4:     while ($x$ is infeasible) do
5:        select an item $i$ from the knapsack;
6:        set $x_i := 0$;
7:        if $\sum_{i=1}^{n} w_i x_i \le C$ then
8:            $x$ is feasible;
9:        end if
10:     end while
11:  end if
12:  output $x$.

There are several selection methods available for choosing the item to remove in the repair procedure, such as the profit-greedy repair, the ratio-greedy repair and the random repair methods; a sketch implementing all three appears after the list.

  1. Profit-greedy repair: sort the items in the decreasing order of their profits $p_i$. Then select the item with the smallest profit and remove it from the knapsack.

  2. Ratio-greedy repair: sort the items in the decreasing order of their profit-to-weight ratios $p_i/w_i$. Then select the item with the smallest ratio and remove it from the knapsack.

  3. Random repair: select an item from the knapsack uniformly at random and remove it from the knapsack.
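The sketch below implements the repair procedure with the three selection rules in Python (illustrative; tie-breaking and efficiency are not addressed).

import random

def repair(x: list, profits: list, weights: list, capacity: float,
           rule: str = "ratio") -> list:
    # remove items one by one until the knapsack constraint is satisfied
    x = list(x)
    while sum(w * b for w, b in zip(weights, x)) > capacity:
        packed = [i for i, b in enumerate(x) if b == 1]
        if rule == "profit":    # profit-greedy: remove the smallest profit
            i = min(packed, key=lambda j: profits[j])
        elif rule == "ratio":   # ratio-greedy: remove the smallest profit/weight
            i = min(packed, key=lambda j: profits[j] / weights[j])
        else:                   # random repair: remove a random packed item
            i = random.choice(packed)
        x[i] = 0
    return x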

Thanks to the repair method, all infeasible solutions are repaired into feasible ones. The fitness function of a feasible solution $x$ is then the objective value $f(x) = \sum_{i=1}^{n} p_i x_i$.

First, let's consider a pure strategy EA using the ratio-greedy repair for solving the 0-1 knapsack problem, which is described as follows.

1:  input an instance of the 0-1 knapsack problem;
2:  initialize a population consisting of $N$ individuals;
3:  for $t = 1, 2, \dots$ do
4:     mutate one individual and generate a child;
5:     if the child is an infeasible solution then
6:        repair it into a feasible solution using the ratio-greedy repair;
7:     end if
8:     select $N$ individuals from the parent population and the child using truncation selection;
9:  end for
10:  output the maximum of the fitness function.
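Assembling the pieces, one run of the pure strategy (N+1) EA might look like the sketch below; the uniform choice of the parent to mutate and the fixed generation budget are assumptions made for illustration.

import random

def pure_strategy_ea(instance, population: list, generations: int,
                     rule: str = "ratio") -> list:
    # instance: a Knapsack object; population: a list of N binary vectors
    N = len(population)
    for _ in range(generations):
        parent = random.choice(population)       # assumption: uniform choice
        child = bitwise_mutation(parent)
        if not instance.is_feasible(child):
            child = repair(child, instance.profits, instance.weights,
                           instance.capacity, rule=rule)
        population = truncation_selection(population, child,
                                          instance.fitness, N)
    return max(population, key=instance.fitness)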

The following proposition reveals that the EA using the ratio-greedy repair cannot produce a good solution to the 0-1 knapsack problem within a polynomial runtime.

Proposition 1

For any constant $\epsilon \in (0, 1)$, the EA using the ratio-greedy repair is not an $\epsilon$-approximation algorithm for the 0-1 knapsack problem.

Proof:

According to Definition 1, it suffices to consider the following instance of the 0-1 knapsack problem:

Item
Profit
Weight
Capacity

where, without loss of generality, the parameter is assumed to be a large positive integer for a sufficiently large $n$.

The global optimum for the instance described above is

A local optimum is

The ratio of fitness between the local optimum and the global optimum is

Suppose that the EA starts at the above local optimum, which has the second highest fitness. Truncation selection combined with the ratio-greedy repair prevents a mutant solution from entering the next generation unless the mutant individual is the global optimum itself. Thus, the EA arrives at the global optimum only if the one-valued bits are flipped into zero-valued ones and the bit corresponding to the globally optimal item is flipped from 0 to 1, while the other zero-valued bits remain unchanged. The probability of this event happening is

Thus, we deduce that the expected runtime is exponential in $n$. This completes the argument.

Letting the constant $\epsilon$ tend to $0$, Proposition 1 tells us that the solution produced by the EA using the ratio-greedy repair after a polynomial runtime may be arbitrarily bad.

Next, we consider another pure strategy EA, which uses the random repair to tackle the 0-1 knapsack problem and is described as follows.

1:  input an instance of the 0-1 knapsack problem;
2:  initialize a population consisting of $N$ individuals;
3:  for $t = 1, 2, \dots$ do
4:     mutate one individual and generate a child;
5:     if the child is an infeasible solution then
6:        repair it into a feasible solution using the random repair;
7:     end if
8:     select $N$ individuals from the parent population and the child using truncation selection;
9:  end for
10:  output the maximum of the fitness function.

Similarly, using the same instance as in Proposition 1, we may prove that this EA cannot produce a good solution to the 0-1 knapsack problem within a polynomial runtime.

Proposition 2

For any constant $\epsilon \in (0, 1)$, the EA using the random repair is not an $\epsilon$-approximation algorithm for the 0-1 knapsack problem.

Proposition 2 tells us that the solution produced by the EA using the random repair may be arbitrarily bad.

Finally, we investigate a pure strategy EA using the profit-greedy repair for solving the 0-1 knapsack problem, which is described as follows.

1:  input an instance of the 0-1 knapsack problem;
2:  initialize a population consisting of $N$ individuals;
3:  for $t = 1, 2, \dots$ do
4:     mutate one individual and generate a child;
5:     if the child is an infeasible solution then
6:        repair it into a feasible solution using the profit-greedy repair;
7:     end if
8:     select $N$ individuals from the parent population and the child using truncation selection;
9:  end for
10:  output the maximum of the fitness function.
Proposition 3

For any constant $\epsilon \in (0, 1)$, the EA using the profit-greedy repair is not an $\epsilon$-approximation algorithm for the 0-1 knapsack problem.

Proof:

Let’s consider the following instance:

Item
Profit
Weight
Capacity

where, without loss of generality, the parameter is assumed to be a large positive integer for a sufficiently large $n$.

The local optimum is

and the global optimum is

The fitness ratio between the local optimum and the global optimum is

Suppose that the EA starts at the local optimum . Let’s investigate the following mutually exclusive and exhaustive events:

  1. An infeasible solution has been generated. In this case the infeasible solution will be repaired back to the local optimum by the profit-greedy repair.

  2. A feasible solution having fitness smaller than that of the local optimum has been generated. In this case, truncation selection will prevent the new feasible solution from being accepted.

  3. A feasible solution is generated having fitness not smaller than that of the local optimum. This is the only way in which truncation selection will preserve the new mutant solution. Nonetheless, this event happens only if the first bit of the individual is flipped from 1 to 0 while sufficiently many zero-valued bits of this individual are flipped from 0 to 1. The probability of this event is

It follows immediately that if the EA starts at the local optimum, the expected runtime to produce a better solution is exponential in $n$. The desired conclusion now follows immediately from Definition 1.

Proposition 3 tells us that a solution produced by the EA using profit-greedy repair may be arbitrarily bad as well.

In summary, we have demonstrated that none of the three pure strategy EAs is an $\epsilon$-approximation algorithm for the 0-1 knapsack problem for any constant $\epsilon \in (0, 1)$.

IV Mixed Strategy (N+1) Evolutionary Algorithm

In this section we analyse mixed strategy evolutionary algorithms, which combine several repair methods. Here a mixed strategy EA refers to an EA employing two or more repair methods, selected with respect to a probability distribution over the set of repair methods. It may be worth noting that other types of mixed strategy EAs have been considered in the literature; for example, the mixed strategy EA in [19] employs four mutation operators. Naturally, we want to know whether or not a mixed strategy (N+1) EA, combining two or more repair methods, may produce a solution with a guaranteed approximation ratio for the 0-1 knapsack problem.

A mixed strategy (N+1) EA for solving the 0-1 knapsack problem is described as follows. The EA combines the ratio-greedy and profit-greedy repair methods.

1:  input an instance of the 0-1 knapsack problem;
2:  initialize a population consisting of $N$ individuals;
3:  for $t = 1, 2, \dots$ do
4:     mutate one individual and generate a child;
5:     if the child is an infeasible solution then
6:        select either the ratio-greedy repair or the profit-greedy repair method uniformly at random;
7:        repair it into a feasible solution;
8:     end if
9:     select $N$ individuals from the parent population and the child using truncation selection;
10:  end for
11:  output the maximum of the fitness function.
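The only change relative to the pure strategy EA is the repair step; a sketch, reusing the repair function from Section III:

import random

def mixed_repair(x: list, instance) -> list:
    # choose one of the two repair rules uniformly at random
    rule = random.choice(["ratio", "profit"])
    return repair(x, instance.profits, instance.weights,
                  instance.capacity, rule=rule)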

Unfortunately, the quality of the solutions produced by the mixed strategy EA still has no guarantee.

Proposition 4

Given any constant $\epsilon \in (0, 1)$, the mixed strategy EA using the ratio-greedy repair and the profit-greedy repair is not an $\epsilon$-approximation algorithm for the 0-1 knapsack problem.

Proof:

Consider the same instance as that in the proof of Proposition 3:

Item
Profit
Weight
Capacity

where the local optimum and the global optimum are the same as in the proof of Proposition 3.

The fitness ratio between the local optimum and the global optimum is

Suppose the EA starts at the local optimum. Let's analyse the following mutually exclusive and exhaustive events that may occur upon completion of mutation:

  1. A feasible solution is generated the fitness of which is smaller than that of the local optimum. In this case, truncation selection will prevent the new feasible solution from entering the next generation.

  2. A feasible solution is generated the fitness of which is not smaller than that of the local optimum. Truncation selection may allow the new feasible solution to enter the next generation. This event happens only if the first bit is flipped from 1 to 0 and sufficiently many zero-valued bits are flipped into one-valued ones. The probability of this event is

  3. An infeasible solution is generated, but fewer than the required number of zero-valued bits are flipped into one-valued bits. In this case, either the infeasible solution is repaired back to the local optimum through the profit-greedy repair, or it is repaired by the ratio-greedy repair into a feasible solution in which the first bit is 0 and fewer than the required number of one-valued bits remain among the rest of the bits. In the latter case the fitness of the new feasible solution is smaller than that of the local optimum and, therefore, it cannot be accepted by truncation selection.

  4. An infeasible solution is generated and no fewer than the required number of zero-valued bits are flipped into one-valued bits. This event happens only if sufficiently many zero-valued bits are flipped into one-valued bits; its probability is

    Afterwards, with a positive probability, it is repaired by the ratio-greedy repair into a feasible solution in which the first bit is 0 and fewer than the required number of one-valued bits remain among the rest of the bits. In this case the fitness of the new feasible solution is smaller than that of the local optimum and, therefore, it is prevented from entering the next generation by truncation selection.

Summarizing the four cases described above, we see that when the EA starts at the local optimum, the probability of generating a better solution in one generation is

We then know that the expected runtime to produce a better solution is exponential in $n$. The conclusion of Proposition 4 now follows at once.

Proposition 4 above tells us that the solutions produced by the mixed strategy (N+1) EA exploiting the ratio-greedy repair and the profit-greedy repair may be arbitrarily bad.

Furthermore, we can prove that even the mixed strategy EA combining the ratio-greedy repair, the profit-greedy repair and the random repair together is not an $\epsilon$-approximation algorithm for the 0-1 knapsack problem; the proof is practically identical to that of Proposition 4.

In summary, we have demonstrated that the mixed strategy EAs are not $\epsilon$-approximation algorithms for the 0-1 knapsack problem for any constant $\epsilon \in (0, 1)$.

V Multi-Objective Evolutionary Algorithm

So far, we have established several negative results about EAs for the 0-1 knapsack problem. A naturally arising and important question is how we can construct an evolutionary approximation algorithm. The most straightforward approach is to apply an approximation algorithm first to produce a good solution and, afterwards, to run an EA to seek the global optimum. Nonetheless, such EAs sometimes get trapped in the absorbing area of a local optimum, so this approach may be less efficient in seeking the global optimum.

Here we analyse a multi-objective EA using helper objectives (MOEA for short), which is similar to the EA presented in [16], except that small changes are made to the helper objectives for the sake of analysis. Experimental results in [16] have shown that the MOEA using helper objectives performs better than a simple combination of an approximation algorithm and a GA.

The MOEA is designed using the multi-objectivization technique. In multi-objectivization, single-objective optimisation problems are transformed into multi-objective optimisation problems by decomposing the original objective into several components [20] or by adding helper objectives [21]. Multi-objectivization may bring both positive and negative effects [22, 23, 24]. This approach has been used for solving several combinatorial optimisation problems, for example, the knapsack problem [15], the vertex cover problem [25] and the minimum label spanning tree problem [26].

Now we describe the MOEA using helper objectives, which is similar to the EA in [16]. The original single-objective optimization problem (1) is recast as a multi-objective optimization problem using three helper objectives. First, let's look at the following instance.

Item      1   2   3   4   5
Profit   10  10  10  12  12
Weight   10  10  10  10  10
Capacity 20

The global optimum in this instance is $x^* = (0, 0, 0, 1, 1)$. In the optimal solution, the average profit of the packed items is the largest. Thus the first helper objective is to maximize the average profit of the items in a knapsack. We do not use the original profit values; instead we use the ranking values of the profits. Assume that the profit of item $i$ is the $k$th smallest; then let the ranking value $r_i = k$. For example, in the above instance items 4 and 5 receive the largest ranking values, while items 1, 2 and 3 receive the smallest. The helper objective function is then defined to be

$$h_1(x) = \frac{\sum_{i=1}^{n} r_i x_i}{\sum_{i=1}^{n} x_i}, \tag{2}$$

where $h_1(x)$ is set to $0$ for the null knapsack.

Next we consider another instance.

Item      1   2   3   4   5
Profit   15  15  20  20  20
Weight   10  10  20  20  20
Capacity 20

The global optimum in this instance is $x^* = (1, 1, 0, 0, 0)$. In the optimal solution, the average profit-to-weight ratio of the packed items is the largest; however, the average profit of these items is not the largest. The second helper objective is therefore to maximize the average profit-to-weight ratio of the items in a knapsack. Again we use ranking values rather than the original profit-to-weight ratios. Assume that the profit-to-weight ratio of item $i$ is the $k$th smallest; then let the ranking value $s_i = k$. For example, in the above instance items 1 and 2 receive the largest ranking values. The helper objective function is then defined to be

$$h_2(x) = \frac{\sum_{i=1}^{n} s_i x_i}{\sum_{i=1}^{n} x_i}, \tag{3}$$

with $h_2(x) = 0$ for the null knapsack.

Finally, consider the following instance.

Item      1   2   3   4    5
Profit   40  40  40  40  150
Weight   30  30  30  30  100
Capacity 120

The global optimum in this instance is $x^* = (1, 1, 1, 1, 0)$. In the optimal solution, neither the average profit of the packed items nor the average profit-to-weight ratio is the largest; instead, the number of packed items is the largest (equivalently, the average weight is the smallest). Thus the third helper objective is to maximize the number of items in a knapsack. The objective function is

$$h_3(x) = \sum_{i=1}^{n} x_i. \tag{4}$$

We then consider the multi-objective optimization problem

$$\max \; \big(f(x), h_1(x), h_2(x), h_3(x)\big). \tag{5}$$
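A Python sketch of the three helper objectives as reconstructed above; the handling of ties in the ranking is an assumption (here, tied values share a ranking value), and all names are illustrative.

def ranking_values(values: list) -> list:
    # r_i = k if the value of item i is the k-th smallest distinct value
    distinct = sorted(set(values))
    rank = {v: k + 1 for k, v in enumerate(distinct)}
    return [rank[v] for v in values]

def helper_objectives(x: list, profits: list, weights: list):
    r = ranking_values(profits)                                    # profit ranks
    s = ranking_values([p / w for p, w in zip(profits, weights)])  # ratio ranks
    m = sum(x)                                                     # h_3: item count
    if m == 0:
        return 0.0, 0.0, 0      # convention for the null knapsack
    h1 = sum(ri * b for ri, b in zip(r, x)) / m
    h2 = sum(si * b for si, b in zip(s, x)) / m
    return h1, h2, m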

The multi-objective optimisation problem (5) is solved by an EA using bitwise mutation, multi-criteria truncation selection, and a mixed strategy of two repair methods.

1:  input an instance of the 0-1 knapsack problem;
2:  initialize a population consisting of $N$ individuals;
3:  for $t = 1, 2, \dots$ do
4:     mutate one individual and generate a child;
5:     if the child is an infeasible solution then
6:        select either the ratio-greedy repair or the profit-greedy repair method uniformly at random;
7:        repair it into a feasible solution;
8:     end if
9:     select $N$ individuals from the parent population and the child using the multi-criteria truncation selection;
10:  end for
11:  output the maximum of the fitness function.

A novel multi-criteria truncation selection operator is adopted in the above EA. Since the target is to maximise several objectives simultaneously, we select the individuals which have higher function values with respect to each objective function. The pseudo-code of the multi-criteria selection is described as follows.

1:  input the parent population and the child;
2:  merge the parent population and the child into a temporary population which consists of $N+1$ individuals;
3:  sort all individuals in the temporary population in the descending order of $f$;
4:  select the individuals from left to right which have pairwise different values of $h_1$ or $h_2$;
5:  if the number of selected individuals is greater than the prescribed quota then
6:     truncate them to the quota;
7:  end if
8:  add the selected individuals into the next generation population;
9:  resort all individuals in the temporary population in the descending order of $h_1$;
10:  select the individuals from left to right which have pairwise different values of $h_3$;
11:  if the number of selected individuals is greater than the prescribed quota then
12:     truncate them to the quota;
13:  end if
14:  add the selected individuals into the next generation population;
15:  resort all individuals in the temporary population in the descending order of $h_2$;
16:  select the individuals from left to right which have pairwise different values of $h_3$;
17:  if the number of selected individuals is greater than the prescribed quota then
18:     truncate them to the quota;
19:  end if
20:  add these selected individuals into the next generation population;
21:  while the next generation population size is less than $N$ do
22:     randomly choose an individual from the parent population and the child, and add it into the next generation population;
23:  end while
24:  output the new population.

In the above algorithm, Steps 3-4 select the individuals with higher values of $f$. In order to preserve diversity, we choose among them the individuals which have different values of $h_1$ or $h_2$. Similarly, Steps 9-10 select the individuals with higher values of $h_1$, choosing the individuals which have different values of $h_3$ to maintain diversity. Steps 15-16 select the individuals with higher values of $h_2$, again choosing the individuals which have different values of $h_3$ to preserve diversity. We do not explicitly select individuals based on $h_3$; instead we do so implicitly during Steps 9-10 and Steps 15-16.
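A simplified Python sketch of the multi-criteria truncation selection; the per-criterion quota and the diversity signatures follow the reconstruction above and are assumptions, not the paper's exact settings.

import random

def multi_criteria_selection(parents: list, child: list,
                             f, h1, h2, h3, N: int, quota: int) -> list:
    pool = parents + [child]                      # N + 1 individuals
    next_gen = []
    # one pass per explicit criterion: f (diversity in h1 or h2),
    # then h1 and h2 (diversity in h3)
    passes = [(f, lambda x: (h1(x), h2(x))), (h1, h3), (h2, h3)]
    for key, signature in passes:
        seen, picked = set(), []
        for x in sorted(pool, key=key, reverse=True):
            sig = signature(x)                    # keep only distinct signatures
            if sig not in seen:
                seen.add(sig)
                picked.append(x)
        next_gen.extend(picked[:quota])           # truncate to the quota
    while len(next_gen) < N:                      # fill up at random
        next_gen.append(random.choice(pool))
    return next_gen[:N]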

Using helper objectives and the multi-criteria truncation selection brings the benefit of searching along several directions, namely $f$, $h_1$, $h_2$ and, implicitly, $h_3$. Hence the MOEA may arrive at a local optimum quickly but, at the same time, does not get trapped in the absorbing area of a local optimum of $f$. The experimental results in [16] demonstrate that the MOEA using helper objectives outperforms the simple combination of an approximation algorithm and a GA.

The analysis is based on a fact derived from the analysis of the greedy algorithm for the 0-1 knapsack problem (see [1, Section 2.4]). Consider the following algorithm:

1:  let $x'$ be the feasible solution that packs only the item with the largest profit;
2:  resort all the items by the ratio of their profits to their corresponding weights so that $p_1/w_1 \ge p_2/w_2 \ge \cdots \ge p_n/w_n$;
3:  greedily add the items in the above order to the knapsack as long as adding an item does not exceed the capacity of the knapsack; denote the resulting solution by $x''$.

Then the fitness of $x'$ or $x''$ is not smaller than 1/2 of the fitness of the optimal solution.
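The fact can be made concrete with the following Python sketch of the greedy procedure, which returns the better of the two candidate solutions; it assumes, as is standard, that every single item fits into the knapsack ($w_i \le C$).

def greedy_half_approximation(profits: list, weights: list,
                              capacity: float) -> list:
    n = len(profits)
    # x1 packs only the single most profitable item
    best = max(range(n), key=lambda i: profits[i])
    x1 = [1 if i == best else 0 for i in range(n)]
    # x2 greedily adds items in decreasing profit-to-weight order
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
    x2, load = [0] * n, 0.0
    for i in order:
        if load + weights[i] <= capacity:
            x2[i] = 1
            load += weights[i]
    # the better of x1 and x2 achieves at least half the optimal profit
    value = lambda x: sum(p * b for p, b in zip(profits, x))
    return max([x1, x2], key=value)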

Based on the above fact, we can prove the following result.

Theorem 1

Provided the population size $N$ is sufficiently large, the MOEA can produce, within a polynomial expected runtime, a feasible solution which is not worse than both $x'$ and $x''$.

Proof:

(1) Without loss of generality, let the first item be the most profitable one. First, it suffices to prove that the EA can generate a feasible solution fitting the Holland schema $1 * \cdots *$ (as usual, $*$ stands for the 'don't care' symbol that can be replaced either by a $0$ or a $1$) within a polynomial runtime.

Suppose that none of the individuals in the population includes the first item, that is, they all fit the Holland schema $0 * \cdots *$. Let $x$ be the individual that is chosen for mutation. Through mutation, the first bit of $x$ can be flipped from $0$ to $1$ with probability $1/n$. If the child is feasible, then we arrive at the desired individual. If the child is infeasible then, with probability $1/2$, the profit-greedy repair is selected; this repair keeps the first item, and a feasible solution fitting the schema is generated. We have now shown that, in each such generation, the EA generates a feasible solution that includes the most profitable item with probability at least $\frac{1}{2n}$.

Thus, the EA can generate a feasible solution fitting the Holland schema $1 * \cdots *$ within an expected runtime of at most $O(n)$.

(2) Without loss of generality, suppose the remaining items are sorted so that $p_2/w_2 \ge p_3/w_3 \ge \cdots \ge p_n/w_n$, and let $x''$ denote the greedy solution defined above. We now demonstrate that the EA can reach $x''$ within a polynomial runtime via the helper objectives $h_2$ and $h_3$.

First we prove that the EA can reach the first target solution within a polynomial runtime. We exploit drift analysis [27] as a tool to establish the result. For a binary vector $x$, define the distance function

(6)

For a population $P$, its distance function is the minimal distance among its individuals. According to the definition of the distance, the above distance function is upper-bounded by a polynomial in $n$.

Suppose that none of the individuals in the current population is the target solution. Let $x$ be the individual whose distance is the smallest in the current population. The individual $x$ belongs to one of the two cases below:

Case 1: $x$ fits the Holland schema in which at least one $*$ bit takes the value of 1.

Case 2: $x$ fits the other Holland schema.

The individual $x$ will be chosen for mutation with probability $1/N$. Now we analyse the mutation events related to the above two cases.

Analysis of Case 1: one of the 1-valued $*$-bits (but not the first bit) is flipped into a 0; the other bits are not changed. This event happens with probability at least

$$\frac{1}{n}\left(1 - \frac{1}{n}\right)^{n-1} \ge \frac{1}{en}. \tag{7}$$

Let's establish how much the objective value increases as a result of such a mutation. Denote the 1-valued bits in $x$ by $i_1, \dots, i_m$, and suppose, without loss of generality, that the bit $i_m$ is flipped into a 0. Comparing the objective values before and after the mutation, the value increases (or, equivalently, the distance decreases) by the amount

(8)

Thanks to the multi-criteria truncation selection, the objective value never decreases, so there is no negative drift. Therefore the drift in Case 1 is

(9)

Analysis of Case 2: the first bit is flipped into a 0; the other bits are not changed. The analysis is identical to that of Case 1, and the drift in Case 2 is the same as in Case 1.

Recall that the distance function is upper-bounded by a polynomial in $n$. Applying the drift theorem [27, Theorem 1], we deduce that the expected runtime to reach the target solution is polynomial in $n$. Once the target is included in the population, it will be kept forever according to the multi-criteria truncation selection.

Next we prove that the EA can reach the subsequent target solution within a polynomial runtime when starting from the previous one. Suppose that the current population includes the previous target individual but not the next one. The individual may be chosen for mutation with probability $1/N$ and can then be mutated into the next target with probability at least $\frac{1}{en}$. The next target individual has the second largest value of the corresponding helper objective; thus, according to the multi-criteria truncation selection, it will be kept in the next generation population. Hence the expected runtime for the EA to reach this individual is polynomial. Arguing in the same way step by step, the EA reaches $x''$ within an expected runtime polynomial in $n$.

Combining the above discussions, we see that the expected runtime to produce a solution not worse than both $x'$ and $x''$ is polynomial in $n$.

If we change the helper objective functions $h_1$ and $h_2$ to those used in [16],

(10)
(11)

then the above proof no longer works, and a new proof is needed to obtain the same conclusion. Furthermore, it should be mentioned that none of the three helper objectives can be removed; otherwise the MOEA will not produce a solution with a guaranteed approximation ratio. On the other hand, the performance might be improved further by adding more objectives, for example,

(12)

VI Conclusions

In this work, we have assessed the solution quality of three types of EAs, which exploit bitwise mutation and truncation selection, for solving the 0-1 knapsack problem. We have proven that the pure strategy EAs using a single repair method and the mixed strategy EAs combining two repair methods are not $\epsilon$-approximation algorithms for any constant $\epsilon \in (0, 1)$; in other words, the solution quality of these EAs may be arbitrarily bad. Nevertheless, we have shown that a multi-objective EA using helper objectives is a 1/2-approximation algorithm with a polynomial expected runtime. Our work demonstrates that using helper objectives is a good approach to designing evolutionary approximation algorithms. The advantage of the EA using helper objectives is that it searches along several directions while also preserving population diversity.

Population-based EAs using other strategies for preserving diversity, such as niching methods, are not investigated in this paper; extending this work to such EAs is left for future research. Another direction for future work is to study the solution quality of MOEAs on the multi-dimensional knapsack problem.

Acknowledgements

This work was supported by the EPSRC under Grant No. EP/I009809/1 and by the NSFC under Grant No. 61170081.

References

  • [1] S. Martello and P. Toth, Knapsack Problems.   Chichester: John Wiley & Sons, 1990.
  • [2] Z. Michalewicz and J. Arabas, “Genetic algorithms for the 0/1 knapsack problem,” in Methodologies for Intelligent Systems.   Springer, 1994, pp. 134–143.
  • [3] S. Khuri, T. Bäck, and J. Heitkötter, “The zero/one multiple knapsack problem and genetic algorithms,” in Proceedings of the 1994 ACM Symposium on Applied Computing.   ACM, 1994, pp. 188–193.
  • [4] P. C. Chu and J. E. Beasley, “A genetic algorithm for the multidimensional knapsack problem,”

    Journal of Heuristics

    , vol. 4, no. 1, pp. 63–86, 1998.
  • [5] G. R. Raidl, “An improved genetic algorithm for the multiconstrained 0-1 knapsack problem,” in Proceedings of the 1998 IEEE World Congress on Computational Intelligence.   IEEE, 1998, pp. 207–211.
  • [6] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, 3rd ed.   New York: Springer Verlag, 1996.
  • [7] E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
  • [8] A. Jaszkiewicz, “On the performance of multiple-objective genetic local search on the 0/1 knapsack problem-a comparative experiment,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 4, pp. 402–412, 2002.
  • [9] M. Eugénia Captivo, J. Clìmaco, J. Figueira, E. Martins, and J. Luis Santos, “Solving bicriteria 0–1 knapsack problems using a labeling algorithm,” Computers & Operations Research, vol. 30, no. 12, pp. 1865–1886, 2003.
  • [10] E. Özcan and C. Başaran, “A case study of memetic algorithms for constraint optimization,” Soft Computing, vol. 13, no. 8, pp. 871–882, 2009.
  • [11] R. Kumar and P. K. Singh, “Assessing solution quality of biobjective 0-1 knapsack problem using evolutionary and heuristic algorithms,” Applied Soft Computing, vol. 10, no. 3, pp. 711–718, 2010.
  • [12] P. Rohlfshagen and J. A. Bullinaria, “Nature inspired genetic algorithms for hard packing problems,” Annals of Operations Research, vol. 179, no. 1, pp. 393–419, 2010.
  • [13] D. P. Williamson and D. B. Shmoys, The Design of Approximation Algorithms.   Cambridge University Press, 2011.
  • [14] O. H. Ibarra and C. E. Kim, “Fast approximation algorithms for the knapsack and sum of subset problems,” Journal of the ACM, vol. 22, no. 4, pp. 463–468, 1975.
  • [15] R. Kumar and N. Banerjee, “Analysis of a multiobjective evolutionary algorithm on the 0–1 knapsack problem,” Theoretical Computer Science, vol. 358, no. 1, pp. 104–120, 2006.
  • [16] J. He, F. He, and H. Dong, “A novel genetic algorithm using helper objectives for the 0-1 knapsack problem,” arXiv, vol. 1404.0868, 2014.
  • [17] C. A. Coello Coello, “Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art,” Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245–1287, 2002.
  • [18] J. He and Y. Zhou, “A comparison of GAs using penalizing infeasible solutions and repairing infeasible solutions II,” in Proceedings of the 2nd International Symposium on Intelligence Computation and Applications.   Wuhan, China: Springer, 2007, pp. 102–110.
  • [19] J. He, W. Hou, H. Dong, and F. He, “Mixed strategy may outperform pure strategy: An initial study,” in Proceedings of 2013 IEEE Congress on Evolutionary Computation.   IEEE, 2013, pp. 562–569.
  • [20] J. D. Knowles, R. A. Watson, and D. W. Corne, “Reducing local optima in single-objective problems by multi-objectivization,” in Evolutionary Multi-Criterion Optimization.   Springer, 2001, pp. 269–283.
  • [21] M. T. Jensen, “Helper-objectives: Using multi-objective evolutionary algorithms for single-objective optimisation,” Journal of Mathematical Modelling and Algorithms, vol. 3, no. 4, pp. 323–347, 2005.
  • [22] J. Handl, S. C. Lovell, and J. Knowles, “Multiobjectivization by decomposition of scalar cost functions,” in Parallel Problem Solving from Nature–PPSN X.   Springer, 2008, pp. 31–40.
  • [23] D. Brockhoff, T. Friedrich, N. Hebbinghaus, C. Klein, F. Neumann, and E. Zitzler, “On the effects of adding objectives to plateau functions,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 591–603, 2009.
  • [24] D. F. Lochtefeld and F. W. Ciarallo, “Helper-objective optimization strategies for the job-shop scheduling problem,” Applied Soft Computing, vol. 11, no. 6, pp. 4161–4174, 2011.
  • [25] T. Friedrich, J. He, N. Hebbinghaus, F. Neumann, and C. Witt, “Approximating covering problems by randomized search heuristics using multi-objective models,” Evolutionary Computation, vol. 18, no. 4, pp. 617–633, 2010.
  • [26] X. Lai, Y. Zhou, J. He, and J. Zhang, “Performance analysis of evolutionary algorithms for the minimum label spanning tree problem,” IEEE Transactions on Evolutionary Computation, 2014 (accepted, online).
  • [27] J. He and X. Yao, “Drift analysis and average time complexity of evolutionary algorithms,” Artificial Intelligence, vol. 127, no. 1, pp. 57–85, 2001.