A Weight-coded Evolutionary Algorithm for the Multidimensional Knapsack Problem

by   Quan Yuan, et al.
Wayne State University

An improved weight-coded evolutionary algorithm (IWCEA) is proposed for solving multidimensional knapsack problems. The IWCEA uses a new decoding method and incorporates a heuristic method in its initialization. Computational results show that the IWCEA performs better than the weight-coded evolutionary algorithm proposed by Raidl (1999) and, on some existing benchmarks, yields better results than the ones reported in the OR-Library.



1 Introduction

The multidimensional knapsack problem (MKP) can be stated as:

    maximize  sum_{j=1}^{n} p_j x_j                              (1a)
    s.t.      sum_{j=1}^{n} r_{ij} x_j <= c_i,  i = 1, ..., m,   (1b)
              x_j in {0, 1},  j = 1, ..., n.

Each of the constraints described in (1b) is called a knapsack constraint. A set of n items with profits p_j > 0 and m resources with capacities c_i > 0 are given. Each item j consumes an amount r_{ij} >= 0 from each resource i. The 0-1 decision variables x_j indicate which items are selected. A well-stated MKP also assumes that r_{ij} <= c_i and sum_{j=1}^{n} r_{ij} > c_i for all i and j, since any violation of these conditions will result in some constraints being eliminated or some x_j's being fixed.

The MKP degenerates to the knapsack problem when m = 1 in (1b). It is well known that the knapsack problem is not strongly NP-hard and is solvable in pseudo-polynomial time. However, the situation is different for the general case of m >= 2: Garey and Johnson (1979)[1] proved that the MKP is strongly NP-hard, and exact techniques are in practice only applicable to instances of small to moderate size.
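The formulation (1) can be checked on a toy instance by brute force. The following sketch is purely illustrative (all numbers are invented); it enumerates every 0-1 vector and keeps the feasible one with the highest profit:

```python
from itertools import product

profits = [10, 7, 4, 3]           # p_j
consumption = [[5, 4, 3, 1],      # r_1j, resource 1
               [2, 6, 4, 3]]      # r_2j, resource 2
capacity = [8, 9]                 # c_i

def feasible(x):
    # every knapsack constraint (1b) must hold
    return all(sum(r[j] * x[j] for j in range(len(x))) <= c
               for r, c in zip(consumption, capacity))

best = max((x for x in product((0, 1), repeat=len(profits)) if feasible(x)),
           key=lambda x: sum(p * v for p, v in zip(profits, x)))
best_profit = sum(p * v for p, v in zip(profits, best))  # objective (1a)
```

Enumeration takes 2^n steps, so it is only viable for tiny n; this is exactly why heuristics and EAs are of interest for the MKP.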

Many practical problems such as the capital budgeting problem[2], allocating processors and databases in a distributed computer system[3], project selection and cargo loading [4], and cutting stock problems[5] can be formulated as an MKP. The MKP is also a subproblem of many general integer programs.

Given the practical and theoretical importance of the MKP, a large number of papers have been devoted to the problem. This is not the place to recall all of them; we refer to the papers of Chu and Beasley (1998)[6] and Fréville (2004)[7] and the monograph of Kellerer et al. (2004)[8] for excellent overviews of theoretical analysis, exact methods, and heuristics for the MKP. Recently, some new algorithms for the MKP have been proposed, such as variants of the genetic algorithm[9], the ant colony algorithm[10], the scatter search method[11], and some new heuristics[12, 13, 14, 15]. Studies analyzing the MKP[16, 17] and generalizations of the MKP[18, 19, 20] have also been put forward.

An evolutionary algorithm (EA) is a generic population-based metaheuristic optimization algorithm. Candidate solutions to the optimization problem play the role of individuals (parents) in a population. Mechanisms inspired by biological evolution, namely selection, crossover, and mutation, are applied, and the fitness function determines the environment within which the solutions “survive”. New generations of the population (children) are produced by the repeated application of these operators.

In the last two decades, EAs have been studied for solving the MKP. Although early works did not convincingly show that genetic algorithms (GAs) were an effective tool for the MKP, the first successful GA implementation was proposed by Chu and Beasley (1998)[6]. Extended numerical comparisons with CPLEX (version 4.0) and other heuristic methods showed that Chu and Beasley’s GA has robust behavior and can obtain high-quality solutions within a reasonable amount of computational time. Raidl and Gottlieb (2005)[17] introduced and compared six different EAs for the MKP and performed static and dynamic analyses explaining the success or failure of these algorithms. They concluded from empirical analysis that an EA based on direct representation combined with local heuristic improvement (referred to as DIH in [17], i.e., the GA of Chu and Beasley (1998)[6] with a slight revision) can achieve better performance than the other EAs examined in [17].

The best success in solving the MKP, as far as we know, has been obtained with tabu-search algorithms embedding effective preprocessing[21, 22]. Recently, impressive results have also been obtained by an implicit enumeration[23], a convergent algorithm[24], and an exact method based on a multi-level search strategy[25]. Compared with EAs, these methods can yield better results when excellent solutions are required, but they are more complicated to implement or take an extremely long time to compute. Since EAs are simple to implement and their computation time is easy to control, they are good alternatives when the quality requirements on MKP solutions are not very strict.

In this paper, we consider a variant of the EA to solve the MKP. This EA uses a special encoding technique called weight-coding (or weight-biasing). We improve a weight-coded EA (WCEA) proposed by Raidl (1999)[26] and propose an improved weight-coded EA (IWCEA). Numerical experiments on benchmarks show that the IWCEA performs better than the WCEA and can compete with DIH on some benchmarks. Moreover, on the same platform, the IWCEA’s iteration time is shorter than that of the other EAs listed in [17].

2 An Introduction to Weight-Coding and its Application to the MKP

When combinatorial optimization problems are solved by an EA, the coding of candidate solutions is a preliminary step. Direct coding such as binary coding is an intuitive method. The main drawback of such a coding is that many infeasible solutions may be generated by the EA’s operators. To avoid that, the basic idea of weight-coding is to represent a candidate solution by a vector of real-valued weights W = (w_1, ..., w_n). The phenotype that a weight vector represents is obtained by a two-step process.

  1. (biasing) The original problem P is temporarily modified to a biased problem P' by changing the parameters of P according to the weights W;

  2. (decoding heuristic) A problem-specific decoding heuristic is used to generate a solution to P'. This solution is interpreted and evaluated for the original (unbiased) problem P.

Weight-coding is an interesting approach because it eliminates the need for an explicit repair algorithm, a penalization of infeasible solutions, or special crossover and mutation operators. It has already been successfully used for a variety of problems such as an optimum communications spanning tree problem[27], the minimum weight triangulation problem[28], the traveling salesman problem[29], and the multiple container packing problem[30].

To the best of the authors’ knowledge, the work of Raidl (1999)[26] is the first to use a weight-coded EA (WCEA) for the MKP. In that paper, several variants of WCEAs were proposed and compared, and Raidl finally recommended one of them; this WCEA was later compared with other EAs in [17]. In this WCEA, W = (w_1, ..., w_n) is the weight vector representing a candidate solution, where weight w_j is associated with item j of the MKP. Corresponding to Step (a), the original MKP is biased by multiplying the profits p_j in (1a) with log-normally distributed weights:

    p'_j = p_j * w_j,   w_j = e^{N(0, gamma)},   (2)

where N(mu, sigma) denotes a normally distributed random number with mean mu and standard deviation sigma, and gamma is a strategy parameter that controls the average intensity of biasing, for which Raidl (1999)[26] suggested a suitable value. Since the resource consumption values r_{ij} and the resource limits c_i are not modified, all feasible solutions of the biased MKP are feasible for (1).
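The biasing step (2) can be sketched as follows; the default gamma = 0.05 is an arbitrary illustrative value, not the setting recommended in [26]:

```python
import math
import random

def bias_profits(profits, gamma=0.05, rng=random):
    # w_j = exp(N(0, gamma)) is log-normally distributed with median 1,
    # so each profit is perturbed multiplicatively around its true value
    return [p * math.exp(rng.gauss(0.0, gamma)) for p in profits]
```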

Corresponding to Step (b), the decoding heuristic Raidl (1999)[26] suggested makes use of surrogate relaxation (see [31, 32]). The resource constraints (1b) are aggregated into a single constraint using surrogate multipliers s_i >= 0, i = 1, ..., m:

    sum_{j=1}^{n} ( sum_{i=1}^{m} s_i r_{ij} ) x_j <= sum_{i=1}^{m} s_i c_i.   (3)

The multipliers s_i are obtained by solving the linear programming (LP) relaxation of the MKP, in which the variables x_j may take real values from [0, 1]. The values of the dual variables are then used as surrogate multipliers, i.e., s_i is set to the shadow price of the i-th constraint in the LP-relaxed MKP. Pseudo-utility ratios are defined as:

    u_j = p'_j / sum_{i=1}^{m} s_i r_{ij}.   (4)

A higher pseudo-utility ratio heuristically indicates that an item is more efficient. After the items are sorted in decreasing order of u_j, the first-fit strategy used as the decoder in the permutation representation is applied: all items are checked one by one, and each item's variable x_j is set to 1 if no resource constraint is violated; otherwise, x_j is set to 0. The computational effort of the decoder is O(n log n) for sorting the u_j plus O(nm) for the first-fit strategy, yielding O(n log n + nm) in total.
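A minimal sketch of this first-fit decoder, assuming the surrogate multipliers s_i are already available (e.g., as LP shadow prices); all names are illustrative, not from Raidl's implementation:

```python
def decode_first_fit(biased_profits, consumption, capacity, multipliers):
    n, m = len(biased_profits), len(capacity)
    # pseudo-utility ratio (4): u_j = p'_j / sum_i s_i * r_ij
    utility = [biased_profits[j] /
               sum(s * r[j] for s, r in zip(multipliers, consumption))
               for j in range(n)]
    order = sorted(range(n), key=lambda j: -utility[j])   # O(n log n)
    x, used = [0] * n, [0.0] * m
    for j in order:                                       # O(n m) first fit
        if all(used[i] + consumption[i][j] <= capacity[i] for i in range(m)):
            x[j] = 1
            for i in range(m):
                used[i] += consumption[i][j]
    return x
```

Because the capacities are never exceeded, every decoded phenotype is feasible for (1), which is the main attraction of the weight-coded approach.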

Raidl’s WCEA can be described as follows (we will explain the details of Steps 6, 7, and 8 afterward):

Algorithm of Raidl’s WCEA

  1. set t := 0;

  2. initialize the population P(0) = {W^1, ..., W^P}, where each weight w_j is a random value following the log-normal distribution in (2);

  3. evaluate P(0);
    for each W^k in P(0)

    1. bias the original MKP;

    2. use the decoding heuristic as in [26] (described above) to get the phenotype x^k;

    3. substitute x^k into (1a) to obtain the fitness f^k;

  4. find the best individual W* and its fitness f*;

  5. while the termination condition is not met do

  6. select parents W^a and W^b from P(t);

  7. crossover W^a and W^b to generate a child W^c;

  8. mutate W^c;

  9. evaluate W^c as in Step 3, obtaining x^c and f^c;

  10. if f^c = f^k for some W^k in the population then (that means W^c is a duplicate of a member of the population)

  11. discard W^c and goto Step 6;
    end if

  12. find the worst individual and replace it with W^c; (steady-state replacement, i.e., the worst individual of the population is replaced)

  13. if f^c > f* then

  14. W* := W^c, f* := f^c; (update best solution found)
    end if

  15. t := t + 1;
    end while

  16. return W*, f*.

In Step 6, binary tournament selection is used. That is, two pools, each consisting of individuals drawn randomly from the population, are formed. Then the two individuals with the best fitness, one taken from each tournament pool, are chosen as parents.

In Step 7, Raidl (1999)[26] suggested uniform crossover instead of one- or two-point crossover. In uniform crossover, two parents produce one child: each weight w_j of the child is chosen randomly by copying the corresponding weight from one or the other parent.

Once a child has been generated through the crossover, a mutation step (Step 8) is performed. Each weight w_j of the child is reset to a new random value following the log-normal distribution with a small probability (a fixed rate per weight as in [26], or at one random position as in [17]).
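Steps 6 to 8 can be sketched as below. This is a hedged illustration, not Raidl's code: the population is assumed to be a list of (weight_vector, fitness) pairs, and gamma and p_mut are illustrative parameter values.

```python
import math
import random

def tournament(population, k=2, rng=random):
    # Step 6: pick k random individuals, return the weights of the fittest
    pool = rng.sample(population, k)
    return max(pool, key=lambda ind: ind[1])[0]

def uniform_crossover(wa, wb, rng=random):
    # Step 7: each child weight is copied from one parent or the other
    return [rng.choice(pair) for pair in zip(wa, wb)]

def mutate(w, gamma=0.05, p_mut=0.05, rng=random):
    # Step 8: each weight is reset to a fresh log-normal value
    # with small probability p_mut
    return [math.exp(rng.gauss(0.0, gamma)) if rng.random() < p_mut else wj
            for wj in w]
```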

In the numerical experiments, the population size P in Step 2 and the termination condition in Step 5 are set to fixed values. Raidl and Gottlieb (2005)[17] compared this WCEA with five other EAs for the MKP. From empirical analysis, this WCEA outperformed all of them except DIH (the meaning of DIH is given in Section 1) on average.

3 An Improved WCEA for the MKP

3.1 Motivation

The core of Raidl’s WCEA is the surrogate-relaxation-based heuristic used in decoding. In our view, this heuristic has two drawbacks. First, the dual variables of the LP-relaxed MKP used in the decoding step are only approximations of the optimal surrogate multipliers and may mislead the search[21], while deriving optimal surrogate multipliers is a difficult task in practice[33]. Secondly, the heuristic decoding might mislead the search if the optimal solution is not very similar to the solution generated by applying the greedy heuristic[34].

In order to avoid using surrogate multipliers, we let every weight w_j follow a uniform distribution on a fixed interval. The profits of the original MKP are biased by multiplying them with the weights:

    p'_j = p_j * w_j.   (5)

As mentioned in Section 2, all feasible solutions of this biased MKP are feasible for (1). In the decoding heuristic we also use the first-fit strategy, i.e., the items are sorted in decreasing order of p'_j (not by the pseudo-utility ratio in (4)) and traversed; each item's variable x_j is set to 1 if no resource constraint is violated. The computational effort of the decoder is again O(n log n + nm) in total.
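The IWCEA decoder thus differs from Raidl's only in the sort key. A sketch (names illustrative):

```python
def decode_by_biased_profit(biased_profits, consumption, capacity):
    n, m = len(biased_profits), len(capacity)
    # items visited in decreasing order of biased profit p'_j,
    # with no surrogate multipliers involved
    order = sorted(range(n), key=lambda j: -biased_profits[j])
    x, used = [0] * n, [0.0] * m
    for j in order:
        if all(used[i] + consumption[i][j] <= capacity[i] for i in range(m)):
            x[j] = 1
            for i in range(m):
                used[i] += consumption[i][j]
    return x
```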

This form of W is similar to the idea of the Random-Key Representation[35]. Surrogate multipliers can be avoided, but the efficiency of the EA is reduced[17]. To overcome this disadvantage, our idea is to obtain a “good” initial population. In the following, we first introduce an idea proposed by Vasquez and Hao[21] and then propose our method.

It is well known that only relaxing the integrality constraints of an MKP may not be sufficient, because the optimal solution of the relaxation may be far away from the optimal binary solution. However, Vasquez and Hao[21] observed that when the integrality constraints are replaced by a hyperplane constraint sum_{j=1}^{n} x_j = k, the corresponding linear programming solution is often close to the optimal binary solution. In the small example given in [21], the LP relaxation of (1) leads to a fractional optimal solution far from the optimal binary solution, while replacing the integrality constraints by the hyperplane constraint with the appropriate k leads directly to the optimal binary solution.

In the above example, if we take the linear programming solution as the weight vector and substitute it into (5), the optimal binary solution can be obtained by the first-fit heuristic mentioned above. Moreover, if we do not restrict k to be an integer, we may obtain further linear programming solutions from which good binary solutions can be derived by the first-fit heuristic. We use these linear programming solutions as a “good” initial population, so the disadvantage of the Random-Key Representation may be overcome. The experimental results presented later confirm this hypothesis. Naturally, the hypothesis does not exclude the possibility that there exists an MKP whose optimal binary solution cannot be obtained from linear programming solutions.

Inspired by this idea, initialization is guided by the LP relaxation with a hyperplane constraint. To begin with, we use some simple heuristic (such as a greedy algorithm) to obtain a 0-1 lower bound LB on the profit. Next, the two following problems:

    k_min = min sum_{j=1}^{n} x_j   s.t. (1b),  sum_{j=1}^{n} p_j x_j >= LB,  0 <= x_j <= 1,   (6)

    k_max = max sum_{j=1}^{n} x_j   s.t. (1b),  sum_{j=1}^{n} p_j x_j >= LB,  0 <= x_j <= 1,   (7)

are solved to obtain k_min and k_max. Then, the linear programming problems

    max sum_{j=1}^{n} p_j x_j   s.t. (1b),  sum_{j=1}^{n} x_j = k,  0 <= x_j <= 1,   (8)

are solved, where k is a real number generated randomly from [k_min, k_max] in each computation. The resulting linear programming solutions form the initial population.
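The initialization above can be sketched with SciPy's `linprog` (HiGHS-backed). This is an assumption-laden illustration, not the paper's MATLAB code: it presumes the bound LPs are constrained by the heuristic profit bound LB as in (6)-(7), and all names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def hyperplane_lp_population(p, R, c, LB, size, rng):
    n = len(p)
    # A_ub x <= b_ub encodes the knapsack constraints (1b) and p.x >= LB
    A_ub = np.vstack([R, -np.asarray(p)])
    b_ub = np.concatenate([c, [-LB]])
    ones = np.ones((1, n))
    # k_min / k_max: extreme (fractional) numbers of selected items, (6)-(7)
    k_min = linprog(ones[0], A_ub=A_ub, b_ub=b_ub, bounds=(0, 1)).fun
    k_max = -linprog(-ones[0], A_ub=A_ub, b_ub=b_ub, bounds=(0, 1)).fun
    pop = []
    for _ in range(size):
        k = rng.uniform(k_min, k_max)          # random hyperplane level
        res = linprog(-np.asarray(p), A_ub=np.asarray(R), b_ub=c,
                      A_eq=ones, b_eq=[k], bounds=(0, 1))
        pop.append(res.x)                      # LP solution as weight vector
    return pop
```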

3.2 Implementation

The scheme of the IWCEA is similar to Raidl’s WCEA, and we take the same population size and termination condition as the WCEA. The differences between the two algorithms lie in the following aspects:

  1. Each weight w_j in Raidl’s WCEA follows the log-normal distribution, while in the IWCEA it follows a uniform distribution on a fixed interval;

  2. Raidl’s WCEA sorts items by pseudo-utility ratios in the heuristic decoding step, while the IWCEA sorts items by the biased profits directly;

  3. The initial population in Raidl’s WCEA is generated randomly, while in the IWCEA linear programming problems are solved;

  4. In the mutation step of the IWCEA, one random weight w_j of the child is reset to a new random value following the uniform distribution instead of the log-normal distribution.

4 Experimental comparison

We use two test suites of MKP benchmark instances for the experimental comparison. The first one, referred to as the CB-suite in this paper, was introduced by Chu and Beasley (1998)[6] and is available in the OR-Library (http://people.brunel.ac.uk/mastjjb/jeb/info.html). This test suite contains 10 instances for each combination of m in {5, 10, 30} constraints, n in {100, 250, 500} items, and a tightness ratio alpha. Each problem has been generated randomly such that c_i = alpha * sum_{j=1}^{n} r_{ij} for all i. Chu and Beasley used their GA (i.e., DIH) to solve these instances and reported their results in the OR-Library. The second MKP benchmark suite (available from http://hces.bus.olemiss.edu/tools.html), used in [17], was first referenced by [21] and originally provided by Glover and Kochenberger. These instances, called GK01 to GK11, range from n = 100 to n = 2500 items and from m = 15 to m = 100 constraints. We call this suite the GK-suite in this paper.
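The capacity rule above can be made concrete with a small generator sketch. The uniform ranges and the independent profit draw are simplifying assumptions of ours (the original CB generator correlates the profits p_j with the consumptions r_ij); only the capacity rule c_i = alpha * sum_j r_ij follows the suite description:

```python
import random

def generate_cb_style(m, n, alpha, rng=random):
    # consumption values drawn uniformly; capacities are the tightness
    # ratio alpha times the total consumption of each resource
    R = [[rng.randint(1, 1000) for _ in range(n)] for _ in range(m)]
    c = [alpha * sum(row) for row in R]
    p = [rng.randint(1, 1000) for _ in range(n)]  # simplified profits
    return p, R, c
```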

instance m n gap[%] (standard deviations in parentheses): PE OR RK DI WB DIH IWCEA
CB1 5 100 0.425 0.745 0.425 0.425 0.425 0.425 0.425
(0.000) (0.210) (0.000) (0.000) (0.000) (0.000) (0.000)
CB2 5 250 0.120 1.321 0.115 0.150 0.106 0.106 0.112
(0.012) (0.346) (0.009) (0.019) (0.007) (0.006) (0.007)
CB3 5 500 0.081 2.382 0.065 0.121 0.042 0.038 0.036
(0.016) (0.657) (0.010) (0.020) (0.008) (0.003) (0.004)
CB4 10 100 0.762 1.013 0.762 0.770 0.761 0.762 0.762
(0.001) (0.163) (0.003) (0.013) (0.000) (0.003) (0.003)
CB5 10 250 0.295 1.498 0.277 0.324 0.249 0.261 0.271
(0.033) (0.225) (0.021) (0.043) (0.017) (0.008) (0.014)
CB6 10 500 0.225 2.815 0.200 0.263 0.131 0.112 0.108
(0.040) (0.462) (0.029) (0.040) (0.014) (0.007) (0.002)
CB7 30 100 1.372 1.800 1.338 1.401 1.319 1.336 1.276
(0.134) (0.182) (0.123) (0.073) (0.093) (0.091) (0.077)
CB8 30 250 0.608 2.076 0.611 0.599 0.535 0.519 0.525
(0.048) (0.346) (0.072) (0.059) (0.031) (0.013) (0.002)
CB9 30 500 0.429 3.267 0.376 0.463 0.306 0.288 0.296
(0.058) (0.442) (0.037) (0.056) (0.024) (0.012) (0.012)
GK01 15 100 0.377 0.683 0.384 0.336 0.308 0.270 0.325
(0.068) (0.098) (0.080) (0.074) (0.077) (0.028) (0.077)
GK02 25 100 0.503 0.959 0.521 0.564 0.481 0.460 0.458
(0.062) (0.144) (0.068) (0.067) (0.045) (0.007) (0.000)
GK03 25 150 0.517 1.002 0.531 0.517 0.452 0.366 0.374
(0.060) (0.140) (0.077) (0.066) (0.042) (0.007) (0.034)
GK04 50 150 0.712 1.164 0.748 0.706 0.669 0.528 0.527
(0.090) (0.143) (0.098) (0.079) (0.081) (0.021) (0.027)
GK05 25 200 0.462 1.124 0.552 0.493 0.397 0.294 0.289
(0.072) (0.153) (0.118) (0.087) (0.046) (0.004) (0.012)
GK06 50 200 0.703 1.236 0.751 0.714 0.611 0.429 0.417
(0.070) (0.141) (0.108) (0.077) (0.060) (0.018) (0.015)
GK07 25 500 0.523 1.468 0.651 0.496 0.382 0.093 0.111
(0.088) (0.092) (0.087) (0.089) (0.082) (0.004) (0.005)
GK08 50 500 0.749 1.517 0.835 0.749 0.534 0.166 0.169
(0.086) (0.109) (0.125) (0.085) (0.066) (0.006) (0.013)
GK09 25 1500 0.890 2.312 1.064 0.695 0.558 0.029 0.030
(0.075) (0.113) (0.133) (0.070) (0.042) (0.001) (0.001)
GK10 50 1500 1.101 1.883 1.177 0.950 0.727 0.052 0.053
(0.065) (0.076) (0.082) (0.090) (0.070) (0.003) (0.002)
GK11 100 2500 1.237 1.677 1.246 1.161 0.867 0.052 0.056
(0.060) (0.056) (0.067) (0.063) (0.061) (0.002) (0.002)
average 0.605 1.597 0.631 0.595 0.493 0.329 0.331
(0.057) (0.215) (0.068) (0.057) (0.043) (0.012) (0.015)
Table 1: Average gaps of best solutions and their standard deviations of the IWCEA and other EAs

Although some commercial integer linear programming (ILP) solvers, such as CPLEX, can solve ILP problems with thousands of integer variables or even more, the MKP remains rather difficult to handle when an optimal solution is wanted. For the CB-suite, the results in [6] showed that most instances of this suite cannot be solved in a reasonable amount of CPU time and memory by CPLEX. For the GK-suite, which includes still more difficult instances with up to n = 2500 items and m = 100 constraints, Fréville (2004)[7] mentioned that CPLEX cannot tackle these instances. Therefore, the MKP continues to be a challenging problem for commercial ILP solvers.

The best known solutions to these benchmarks, as far as we know, were obtained by Vasquez and Hao (2001)[21] and improved by Vasquez and Vimont (2005)[22]. Their method is based on tabu search and is time-consuming compared with EAs.

Raidl and Gottlieb (2005)[17] tested six different variants of EAs, called Permutation Representation (PE), Ordinal Representation (OR), Random-Key Representation (RK), Weight-Biased Representation (WB, i.e., Raidl’s WCEA), and Direct Representation (DI and DIH). We first compare the IWCEA with these EAs except DIH. We use the whole GK-suite and nine instances (called CB1 to CB9) drawn from the CB-suite, namely the first instances for each combination of m and n.

For a solution x, the gap is defined as:

    gap = 100% * (f_LP - f(x)) / f_LP,

where f_LP is the optimum of the LP-relaxed problem; the gap measures the quality of x.
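As a one-line sketch (f_lp is the LP-relaxation optimum, f the objective value of the solution):

```python
def gap_percent(f, f_lp):
    # relative distance of a solution's profit from the LP optimum, in %
    return 100.0 * (f_lp - f) / f_lp
```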

We implemented the IWCEA on a personal computer (Intel Core Duo T5800, 2 GHz, 1.99 GB main memory, Windows XP) using DEV-C++. The initial population is generated by MATLAB. The population size is 100, and each run was terminated after a fixed number of created solution candidates; rejected duplicates were not counted.

Table 1 shows the average gaps of the final solutions and their standard deviations, obtained from 30 independent runs per problem instance, for the IWCEA and the other six variants; the results of the six variants come from [17]. The results in Table 1 show that the IWCEA outperformed PE, OR, RK, and DI. On all instances but CB2, CB4, CB5, and GK01, the IWCEA performed equal to or better than Raidl’s WCEA. In particular, on GK02 to GK11, the IWCEA performed much better than Raidl’s method.

Table 1 also shows that the IWCEA performed slightly worse than DIH on average, but we will point out that it can yield better results than DIH on some instances. Since the best results in the CB-suite can be obtained by CPLEX for the smallest combinations of m and n, we tested the other 180 instances in the CB-suite. Each instance was computed 30 times, and the best results were compared with those reported in the OR-Library. The numbers of instances for which the IWCEA yielded better, equal, or worse results than the ones reported in the OR-Library are shown in Table 2. Tables 3 to 8 show the comparison for each instance. These tables show that the results of more than 50% of the instances can be improved by the IWCEA.

m n number of instances better equal worse
30 100 30 2 28 0
10 250 30 12 16 2
30 250 30 15 10 5
5 500 30 19 9 2
10 500 30 23 4 3
30 500 30 21 4 5
Total 180 92 71 17
Table 2: The statistical data of the numbers that the IWCEA yielded better, equal and worse results than the results reported in OR-library

5 Conclusion

We have proposed an IWCEA for solving multidimensional knapsack problems. The IWCEA differs from Raidl’s WCEA in that surrogate multipliers are not used and a heuristic method is incorporated in the initialization. Experimental comparison has shown that the IWCEA yields better results than Raidl’s WCEA in [26] and, on some existing benchmarks, better results than the ones reported in the OR-Library.


  • [1] M.R. Garey and D.S. Johnson, Computers and intractability: A guide to the theory of NP-completeness, New York: W. H. Freeman & Co., 1979.
  • [2] H.M. Markowitz and A.S. Manne, On the solution of discrete programming problems, Econometrica, 1957; 25: 84-110.
  • [3] B. Gavish and H. Pirkul, Allocation of databases and processors in a distributed computing system, in J. Akoka (ed.) Management of Distributed Data Proc., North-Holland, 1982, pp. 215-231.
  • [4] W. Shih, A Branch and Bound Method for the Multiconstraint Zero-One Knapsack Problem, J. Oper. Res. Society, 1979; 30: 369-378.
  • [5] P.C. Gilmore and R.E. Gomory, The theory and computation of knapsack functions, Oper. Res., 1966; 14: 1045-1075.
  • [6] P.C. Chu and J.E. Beasley, A genetic algorithm for the multidimensional knapsack problem, J. Heuristics, 1998; 4: 63-86.
  • [7] A. Fréville, The multidimensional 0-1 knapsack problem: An overview, Eur. J. Oper. Res., 2004; 155: 1-21.
  • [8] H. Kellerer, U. Pferschy, and D. Pisinger. Knapsack problems, Berlin: Springer, 2004.
  • [9] H. Li, Y. Jiao, L. Zhang, and Z. Gu, Genetic algorithm based on the orthogonal design for multidimensional knapsack problems, Advances in Natural Computation, Springer Berlin/Heidelberg, 2006: 696-705.
  • [10] M. Kong, P. Tian, and Y. Kao, A new ant colony optimization algorithm for the multidimensional knapsack problem, Comput. Oper. Res., 2008; 35: 2672-2683.
  • [11] S. Hanafi, C. Wilbaut, Scatter search for the 0-1 multidimensional knapsack problem, J. Math. Model. Algor., 2008; 7: 143-159.
  • [12] V. Boyer, M. Elkihel, and D. El Baz, Heuristics for the 0-1 multidimensional knapsack problem, Eur. J. Oper. Res., 2009; 199: 658-664.
  • [13] K. Fleszar and K.S. Hindi, Fast, effective heuristics for the 0-1 multi-dimensional knapsack problem, Comput. Oper. Res., 2009; 36: 1602-1607.
  • [14] J. Puchinger, G.R. Raidl, and M. Gruber, Cooperating memetic and branch-and-cut algorithms for solving the multidimensional knapsack problem, in Proc. the 6th Metaheuristics Int. Conf. 2005, pp. 775-780.
  • [15] D. Zou, L. Gao, S. Li, et al., Solving 0-1 knapsack problem by a novel global harmony search algorithm, Appl. Soft Comput. 2011; 11: 1556-1564.
  • [16] A. Fréville, S. Hanafi, The multidimensional 0-1 knapsack problem–Bounds and computational aspects, Ann. Oper. Res., 2005; 139: 195-227.
  • [17] G.R. Raidl and J. Gottlieb, Empirical analysis of locality, heritability and heuristic bias in evolutionary algorithms: a case study for the multidimensional knapsack problem, Evolut. Comput., 2005; 13: 441-475.
  • [18] Z. Ren, Z. Feng, and A. Zhang. Fusing ant colony optimization with Lagrangian relaxation for the multiple-choice multidimensional knapsack problem, Information Sciences, 2012; 182: 15-29.
  • [19] M. Hifi, H. M’Halla, and S. Sadfi, An exact algorithm for the knapsack sharing problem, Comput. Oper. Res., 2005; 32: 1311-1324.
  • [20] K. Khalili-Daghani, M. Nojavan, and M. Tavana. The multi-start Partial Bound Enumeration method versus the efficient epsilon-constraint method, Appl. Soft Comput. 2013; 13: 1627-1638.
  • [21] M. Vasquez and J.-K. Hao, A hybrid approach for the 0-1 multidimensional knapsack problem, in Proc. the 17th Int. Joint Conf. on Artificial Intelligence, 2001, pp. 328-333.

  • [22] M. Vasquez and Y. Vimont, Improved results on the 0-1 multidimensional knapsack problem, Eur. J. Oper. Res., 2005; 165: 70-81.
  • [23] Y. Vimont, S. Boussier, and M. Vasquez. Reduced costs propagation in an efficient implicit enumeration for the 0-1 multidimensional knapsack problem, J. Comb. Optim. 2008; 15: 165-178.
  • [24] S. Hanafi, C. Wilbaut, Improved convergent heuristic for the 0-1 multidimensional knapsack problem, Ann. Oper. Res. 2011; 183: 125-142.
  • [25] S. Boussier, M. Vasquez, Y. Vimont, S. Hanafi, and P. Michelon, A multi-level search strategy for the 0-1 multidimensional knapsack problem, Discrete Appl. Math., 2010; 158: 97-109.
  • [26] G.R. Raidl, Weight-codings in a genetic algorithm for the multiconstraint knapsack problem, in Proc. of CEC99. IEEE Press, 1999, pp. 596-603.
  • [27] C.C. Palmer and A. Kershenbaum, Representing trees in genetic algorithms, in Proc. the 1st IEEE Int. Conf. Evol. Comput., Orlando, FL, 1994, pp. 379-384.
  • [28] K. Capp and B. Julstrom, A weight-coded genetic algorithm for the minimum weight triangulation problem, in Proc. 1998 ACM Symp. on Applied Computing, ACM Press, 1998, pp. 327-331.
  • [29] B. Julstrom, Comparing decoding algorithms in a weight-coded GA for TSP, in Proc. 1998 ACM Symp. on Applied Computing, ACM Press, 1998, pp. 313-317.
  • [30] G.R. Raidl, A weight-coded genetic algorithm for the multiple container packing problem, in Proc. the 14th ACM Symp. on Applied Computing, San Antonio, TX, 1999, pp. 291-296.
  • [31] F. Glover, Surrogate constraint duality in mathematical programming, Oper. Res., 1975; 23: 434-451.
  • [32] S. Hanafi, A. Fréville, An efficient tabu search approach for the 0-1 multidimensional knapsack problem, Eur. J. Oper. Res., 1998; 106: 659-675.
  • [33] B. Gavish and H. Pirkul, Efficient algorithms for solving multiconstraint zero-one knapsack problems to optimality, Math. Program., 1985; 31: 78-105.
  • [34] F. Rothlauf and D.E. Goldberg, Redundant representation in evolutionary computation, Evolut. Comput., 2003; 11: 381-415.

  • [35] R. Hinterding, Representation, constraint satisfaction and the knapsack problem, in Proc. 1999 IEEE Congr. on Evolutionary Computation, 1999, pp. 1286-1292.