Multi-objective Differential Evolution with Helper Functions for Constrained Optimization

09/30/2015
by   Tao Xu, et al.
Aberystwyth University

Solving constrained optimization problems by multi-objective evolutionary algorithms has achieved tremendous success in the last decade. Standard multi-objective schemes usually aim at minimizing the objective function and the degree of constraint violation simultaneously. This paper proposes a new multi-objective method for solving constrained optimization problems. The new method keeps two standard objectives: the original objective function and the sum of degrees of constraint violation. Besides them, four more objectives are added: one is based on the feasible rule, and the other three come from penalty functions. This paper conducts an initial experimental study on thirteen benchmark functions. A simplified version of CMODE is applied to solving the resulting multi-objective optimization problems. Our initial experimental results confirm our expectation that adding more helper functions can be useful: the performance of SMODE with more helper functions (four or six) is better than that with only two helper functions.


I Introduction

Optimization problems in real-world applications are usually subject to different kinds of constraints. These problems are called constrained optimization problems (COPs). In the minimization case, a COP is formulated as follows:

(1) $\min f(\vec{x}), \quad \vec{x} = (x_1, \dots, x_n) \in \Omega,$
(2) $\text{subject to } g_i(\vec{x}) \le 0, \ i = 1, \dots, q, \quad h_j(\vec{x}) = 0, \ j = q+1, \dots, m,$

where $\Omega$ is a bounded domain in $\mathbb{R}^n$, given by

(3) $\Omega = \{ \vec{x} \mid l_k \le x_k \le u_k, \ k = 1, \dots, n \},$

where $l_k$ is the lower boundary and $u_k$ the upper boundary of the $k$th decision variable. $g_i(\vec{x}) \le 0$ is the $i$th inequality constraint while $h_j(\vec{x}) = 0$ is the $j$th equality constraint. The feasible region $\Phi$ is defined as:

$\Phi = \{ \vec{x} \in \Omega \mid g_i(\vec{x}) \le 0, \ i = 1, \dots, q; \ h_j(\vec{x}) = 0, \ j = q+1, \dots, m \}.$

If an inequality constraint $g_i(\vec{x}) \le 0$ satisfies $g_i(\vec{x}) = 0$ at a point $\vec{x} \in \Phi$, we say it is active at $\vec{x}$. All equality constraints $h_j(\vec{x}) = 0$ are active at all points of $\Phi$.
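As a concrete toy illustration of these definitions, the sketch below checks feasibility and active constraints for a hypothetical COP with one inequality and one equality constraint; the functions f, g and h are invented for illustration, not taken from the paper's benchmarks:

```python
# Hypothetical COP for illustration: minimize f subject to g(x) <= 0, h(x) = 0.
def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):                     # inequality constraint, satisfied where g(x) <= 0
    return 1.0 - x[0] - x[1]

def h(x):                     # equality constraint, satisfied where h(x) = 0
    return x[0] - x[1]

EPS = 1e-4                    # tolerance allowed for the equality constraint

def is_feasible(x):
    return g(x) <= 0.0 and abs(h(x)) <= EPS

def active_constraints(x, tol=1e-9):
    """Inequality constraints are active where they hold with equality;
    equality constraints are active at every point of the feasible region."""
    active = []
    if abs(g(x)) <= tol:
        active.append("g")
    active.append("h")
    return active

print(is_feasible([0.5, 0.5]), active_constraints([0.5, 0.5]))
# prints: True ['g', 'h']
```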

Many constraint-handling techniques have been proposed in the literature. The most popular include penalty function methods, the feasibility rule, multi-objective optimization and repair methods. A detailed introduction to this topic can be found in several comprehensive surveys [1, 2, 3].

This paper focuses on multi-objective optimization methods, which are regarded as one of the most promising ways of dealing with COPs [4]. The technique is based on using multi-objective evolutionary algorithms (MOEAs) for solving single-objective optimization problems. This idea can be traced back to the 1990s [5] and is also termed multi-objectivization [6]. Multi-objective methods separate the objective function and the constraint violation degrees into different fitness functions. This is unlike penalty functions, which combine them into a single fitness function. The main purpose of using multi-objective optimization is to relax the requirement of setting or fine-tuning penalty parameters, as happens with penalty function methods.

Research on dealing with COPs using MOEAs has made significant achievements since 2000. There exist various methods of applying MOEAs to solving COPs. According to the taxonomy proposed in [7, 4], these methods are classified into five categories:

  1. Bi-objective feasible-compliant methods: methods that transform the original single-objective COP into an unconstrained bi-objective optimization problem, where the first objective is the original objective function and the second objective is a measure of the constraint violations. While solving the multi-objective problem, selection always prefers a feasible solution over an infeasible solution. [8, 9] are two examples of bi-objective feasible-compliant methods. However, the amount of research in this category is very limited.

  2. Bi-objective non-feasible-compliant methods: as in the first category, the original single-objective COP is transformed into an unconstrained bi-objective optimization problem. But while solving the latter problem, selection is based on the dominance relation and doesn't prefer a feasible solution over an infeasible solution. A lot of work belongs to this category, such as [10, 11, 12, 13, 14, 15, 16, 17, 18].

  3. Multi-objective feasible-compliant methods: methods that transform the original single-objective COP into an unconstrained multi-objective optimization problem. The first objective is the original objective function; the other objectives correspond to the individual constraints in the COP. While solving the multi-objective problem, selection always prefers a feasible solution over an infeasible solution. The work in this category includes [19, 20, 21, 22].

  4. Multi-objective non-feasible-compliant methods: as in the third category, the original single-objective COP is transformed into an unconstrained multi-objective optimization problem. But while solving the multi-objective problem, selection doesn't prefer a feasible solution over an infeasible solution. The idea was used in [23, 24, 25].

  5. Other multi-objective methods: methods that transform the original single-objective COP into an unconstrained multi-objective optimization problem, but some or all of the objectives in the latter problem are different from the original objective function and the degrees of constraint violation. For example, the first objective in [26] is the original objective function with added noise; the second objective equals the original objective function but considers relaxed constraints. This category is less studied than the others. The main problem is how to construct helpful objectives.

The multi-objective method in this paper belongs to the fifth category. Our method still keeps the standard objectives: the objective function and the total degree of constraint violation. But besides them, more objectives are added. One is based on the feasible rule. The others come from penalty functions. In this way a new multi-objective model is constructed for constrained optimization. A natural question is whether adding more objectives can improve the performance of MOEAs for solving constrained optimization problems. This paper conducts an experimental study: a simplified version of CMODE [27] is applied to solving the resulting multi-objective optimization problems. Our initial experimental result is positive. It confirms our expectation that adding helper functions can be useful.

The rest of the paper is organized as follows. Section II reviews differential evolution. Section III proposes a new multi-objective model for constrained optimization. Section IV describes a multi-objective differential evolution algorithm with helper functions. Section V gives experimental results and compares the proposed approach with different numbers of helper functions. Section VI concludes the paper.

II Differential Evolution

Differential evolution (DE), proposed by Storn and Price [28], is arguably one of the most powerful stochastic real-parameter optimization algorithms in current use [29].

In DE, a population at generation $t$ is represented by $N$ $n$-dimensional vectors:

(4) $P^{(t)} = \{ \vec{x}_1^{(t)}, \dots, \vec{x}_N^{(t)} \},$
(5) $\vec{x}_i^{(t)} = (x_{i,1}^{(t)}, \dots, x_{i,n}^{(t)}), \quad i = 1, \dots, N,$

where $t$ represents the generation counter. The population size $N$ does not change during the minimization process. The initial vectors are chosen randomly from $\Omega$. The formula below shows how to generate an initial individual at random:

(6) $x_{i,k}^{(0)} = l_k + \mathrm{rand}_k \cdot (u_k - l_k), \quad k = 1, \dots, n,$

where $\mathrm{rand}_k$ is a uniform random number from $[0, 1]$.

There exist several variants of DE. The original DE algorithm [28] is utilized in this paper. This DE algorithm consists of three operations: mutation, crossover and selection, which are described as follows.

  • Mutation: for each target vector $\vec{x}_i^{(t)}$, where $i = 1, \dots, N$, a mutant vector is generated by

    (7) $\vec{v}_i^{(t+1)} = \vec{x}_{r_1}^{(t)} + F \cdot (\vec{x}_{r_2}^{(t)} - \vec{x}_{r_3}^{(t)}),$

    where the random indexes $r_1, r_2, r_3 \in \{1, \dots, N\}$ are mutually different integers. They are also chosen to be different from the running index $i$. $F$ is a real and constant factor from $[0, 2]$ which controls the amplification of the differential variation $(\vec{x}_{r_2}^{(t)} - \vec{x}_{r_3}^{(t)})$. In case $\vec{v}_i^{(t+1)}$ is out of the domain $\Omega$, the mutation operation is repeated until $\vec{v}_i^{(t+1)}$ falls in $\Omega$.

  • Crossover: in order to increase population diversity, crossover is also used in DE. The trial vector $\vec{u}_i^{(t+1)}$ is generated by mixing the target vector $\vec{x}_i^{(t)}$ with the mutant vector $\vec{v}_i^{(t+1)}$. The trial vector is constructed as follows:

    (8) $u_{i,k}^{(t+1)} = \begin{cases} v_{i,k}^{(t+1)}, & \text{if } \mathrm{rand}_k \le CR \text{ or } k = k_{\mathrm{rand}}, \\ x_{i,k}^{(t)}, & \text{otherwise}, \end{cases} \quad k = 1, \dots, n,$

    where $\mathrm{rand}_k$ is a uniform random number from $[0, 1]$. Index $k_{\mathrm{rand}}$ is randomly chosen from $\{1, \dots, n\}$. $CR \in [0, 1]$ denotes the crossover constant which has to be determined by the user. In addition, the condition "$k = k_{\mathrm{rand}}$" is used to ensure the trial vector gets at least one parameter from the mutant vector $\vec{v}_i^{(t+1)}$.

  • Selection: a greedy criterion is used to decide which offspring generated by mutation and crossover should be selected into population $P^{(t+1)}$. The trial vector $\vec{u}_i^{(t+1)}$ is compared to the target vector $\vec{x}_i^{(t)}$, and the better one is preserved for the next generation.
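The three operations above can be sketched in Python as a single DE generation (a plain DE/rand/1/bin sketch with the paper's F and CR values as defaults; note that, for brevity, out-of-bounds mutant components are clipped here, whereas the paper repeats the mutation until the mutant falls inside the domain):

```python
import random

def de_generation(pop, f_obj, lower, upper, F=0.6, CR=0.95):
    """One generation of classic DE/rand/1/bin (illustrative sketch)."""
    n, N = len(lower), len(pop)
    new_pop = []
    for i, target in enumerate(pop):
        # mutation: three mutually distinct indices, all different from i
        r1, r2, r3 = random.sample([j for j in range(N) if j != i], 3)
        mutant = [pop[r1][k] + F * (pop[r2][k] - pop[r3][k]) for k in range(n)]
        # clip to the box [l, u] (the paper instead resamples the mutation)
        mutant = [min(max(mutant[k], lower[k]), upper[k]) for k in range(n)]
        # binomial crossover; k_rand guarantees at least one mutant component
        k_rand = random.randrange(n)
        trial = [mutant[k] if (random.random() <= CR or k == k_rand)
                 else target[k] for k in range(n)]
        # greedy selection between target and trial
        new_pop.append(trial if f_obj(trial) <= f_obj(target) else target)
    return new_pop
```

For example, running twenty generations on the sphere function with a population of ten vectors steadily improves the best objective value, since greedy selection never replaces an individual with a worse one.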

III Multi-objective Model with More Helper Functions for Constrained Optimization

Without loss of generality, consider a minimization problem with only two constraints:

(9) $\min f(\vec{x}), \quad \text{subject to } g(\vec{x}) \le 0, \ h(\vec{x}) = 0.$

A multi-objective method transfers the above single-objective optimization problem with constraints into a multi-objective optimization problem without constraints.

The first fitness function is the original objective function without considering constraints:

(10) $f_1(\vec{x}) = f(\vec{x}).$

Notice that the optimal solution to minimizing $f_1$ might be different from that of the original constrained optimization problem (9); therefore $f_1$ is only a helper fitness function.

The second objective is related to constraint violation. Define the degree of violating each constraint as

(11) $v_1(\vec{x}) = \max\{0, g(\vec{x})\},$
(12) $v_2(\vec{x}) = \max\{0, |h(\vec{x})| - \delta\},$

where $\delta$ is the tolerance allowed for the equality constraint.

The second fitness function is defined by the sum of constraint violation degrees:

(13) $f_2(\vec{x}) = v_1(\vec{x}) + v_2(\vec{x}).$
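The violation degrees and the second fitness function can be sketched as follows; the constraint functions are passed in as plain callables, and the generalization to several inequality and equality constraints is straightforward:

```python
def violation_degrees(x, ineqs, eqs, eps=1e-4):
    """Degree of violating each constraint: max(0, g_i(x)) for an inequality
    g_i(x) <= 0 and max(0, |h_j(x)| - eps) for an equality h_j(x) = 0,
    where eps is the tolerance allowed for the equality constraints."""
    v = [max(0.0, gi(x)) for gi in ineqs]
    v += [max(0.0, abs(hj(x)) - eps) for hj in eqs]
    return v

def f2(x, ineqs, eqs, eps=1e-4):
    # the second fitness function: sum of the constraint violation degrees
    return sum(violation_degrees(x, ineqs, eqs, eps))
```

A feasible point has f2(x) = 0; any positive value measures how far the point is from satisfying the constraints.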

The above two objectives are widely used in multi-objective methods for constrained optimization [4]. An interesting question is whether using more fitness functions can improve the performance of MOEAs. This paper aims to investigate the relationship between the performance of multi-objective methods and the number of objectives used.

A remaining problem is how to construct new helper functions. This paper designs two types of general-purpose fitness functions, which are constructed from the feasible rule and the penalty method. Any problem-specific knowledge can be used in designing helper functions. For example, inspired by a greedy algorithm, several helper functions are specially constructed for solving the 0-1 knapsack problem in [30].

Besides the original objective function $f_1$ and the sum of constraint violation degrees $f_2$, the third fitness function is designed by the feasible rule [31]. During pairwise comparison of individuals:

  1. when two feasible solutions are compared, the one with the better objective function value is chosen;

  2. when one feasible solution and one infeasible solution are compared, the feasible solution is chosen;

  3. when two infeasible solutions are compared, the one with smaller constraint violation is chosen.

According to the feasible rule, the third fitness function is constructed as follows: for an individual $\vec{x}$ in a population $P$,

(14) $f_3(\vec{x}) = \begin{cases} f(\vec{x}), & \text{if } \vec{x} \text{ is feasible}, \\ f_{\mathrm{worst}} + v_1(\vec{x}) + v_2(\vec{x}), & \text{otherwise}. \end{cases}$

In the above, $f_{\mathrm{worst}}$ is the "worst" fitness of feasible individuals in population $P$, given by

(15) $f_{\mathrm{worst}} = \max \{ f(\vec{x}) \mid \vec{x} \in P \text{ and } \vec{x} \text{ is feasible} \}.$

Since the reference point $f_{\mathrm{worst}}$ depends on population $P$, the values of $f_3(\vec{x})$ for the same $\vec{x}$ might differ between populations. However, the optimal feasible solution to minimizing $f_3$ is always the best in any population. Thus the optimal feasible solution to minimizing $f_3$ is exactly the same as that of the constrained optimization problem. For this reason, $f_3$ is called an equivalent fitness function.
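A minimal Python sketch of the feasible-rule fitness over a whole population is given below; the choice of 0.0 as a fallback reference point when the population contains no feasible individual is an assumption of this sketch, not part of the paper's definition:

```python
def f3_values(pop, f_obj, viol):
    """Feasible-rule fitness: a feasible individual keeps its objective value;
    an infeasible one gets the worst feasible objective value in the
    population plus its constraint violation degree."""
    feasible_objs = [f_obj(x) for x in pop if viol(x) == 0.0]
    f_worst = max(feasible_objs) if feasible_objs else 0.0  # assumed fallback
    return [f_obj(x) if viol(x) == 0.0 else f_worst + viol(x) for x in pop]
```

With this definition every infeasible individual is ranked behind every feasible one, which is exactly the ordering imposed by the feasible rule.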

Inspired by the penalty function method, more fitness functions with different penalty coefficients are constructed as follows:

(16) $f_4(\vec{x}) = f(\vec{x}) + c_1 \left( v_1(\vec{x}) + v_2(\vec{x}) \right),$
(17) $f_5(\vec{x}) = f(\vec{x}) + c_2 \left( v_1(\vec{x}) + v_2(\vec{x}) \right),$
(18) $f_6(\vec{x}) = f(\vec{x}) + c_3 \left( v_1(\vec{x}) + v_2(\vec{x}) \right),$

where $c_1 < c_2 < c_3$ are penalty coefficients. If we set $c_3 = +\infty$, then $f_6$ represents a death penalty to infeasible solutions. Such a function is a helper function because minimizing it might not lead to the optimal feasible solution.
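The penalty-based helpers can be sketched as a factory over the penalty coefficient; the concrete coefficients used below are illustrative placeholders, not the paper's settings:

```python
def make_penalty_helper(c):
    """Return a helper fitness of the form f(x) + c * v(x), where v(x) is the
    total constraint violation degree; c = inf yields a death penalty."""
    def helper(f_val, v_val):
        if c == float("inf"):
            return f_val if v_val == 0.0 else float("inf")
        return f_val + c * v_val
    return helper

# Illustrative coefficients c1 < c2 < c3 (placeholders, not the paper's values).
f4 = make_penalty_helper(10.0)
f5 = make_penalty_helper(1000.0)
f6 = make_penalty_helper(float("inf"))   # death penalty on infeasible points
```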

In summary, the original constrained optimization problem is transformed into a multi-objective optimization problem:

(19) $\min \left( f_1(\vec{x}), f_2(\vec{x}), f_3(\vec{x}), f_4(\vec{x}), f_5(\vec{x}), f_6(\vec{x}) \right),$

which consists of one equivalent function ($f_3$) and five helper functions. This new multi-objective model for constrained optimization is the main contribution of this paper. The model may potentially include many objectives.

IV Multi-objective Differential Evolution for Constrained Optimization

The CMODE framework [27] is chosen to solve the above multi-objective optimization problem (19). Different from general-purpose MOEAs, CMODE is specially designed for solving constrained optimization problems. Hence it is expected that CMODE is efficient in solving the multi-objective optimization problem (19). A comparison study with several MOEAs is still ongoing.

CMODE [27] was originally applied to solving a bi-objective optimization problem which consists of only two objectives: $f_1$ and $f_2$. However, it is easy to reuse the existing framework of CMODE for multi-objective optimization problems. Due to time limitations, a simplified CMODE algorithm is implemented in this paper. To distinguish it from the original CMODE, the simplified version is abbreviated as SMODE. The algorithm is described as follows.

0:  Input: $N$: population size; $\lambda$: the number of individuals involved in DE operations; $FES_{\max}$: the maximum number of fitness evaluations
1:  randomly generate an initial population $P_0$ with population size $N$;
2:  evaluate the values of $f(\vec{x})$ and $v(\vec{x})$ for each individual in the initial population, and then calculate the value of $f_i(\vec{x})$ for $i = 1, \dots, 6$;
3:  set $FES = N$; // $FES$ denotes the number of fitness evaluations;
4:  for $t = 0, 1, 2, \dots$ while $FES < FES_{\max}$ do
5:     choose $\lambda$ individuals (denoted by set $A$) from population $P_t$;
6:     let $B = P_t \setminus A$;
7:     for each individual in set $A$, an offspring is generated by using DE mutation and crossover operations as explained in Section II; then $\lambda$ children (denoted by set $C$) are generated from $A$;
8:     evaluate the values of $f(\vec{x})$ and $v(\vec{x})$ for each individual in $C$ and then obtain the value of $f_i(\vec{x})$ for $i = 1, \dots, 6$;
9:     set $FES = FES + \lambda$;
10:     identify all nondominated individuals in $C$ (denoted by set $R$);
11:     for each individual $\vec{x}$ in $R$ do
12:        find all individual(s) in $A$ dominated by $\vec{x}$;
13:        randomly replace one of these dominated individuals by $\vec{x}$;
14:     end for
15:     let $P_{t+1} = A \cup B$;
16:  end for
17:  return the best found solution

The algorithm is explained step by step in the following. At the beginning, an initial population $P_0$ is chosen at random, where all initial vectors are chosen randomly from $\Omega$.

At each generation, the parent population $P_t$ is split into two groups: one group (set $A$) with $\lambda$ parent individuals that are used for DE operations, and the other group (set $B$) with individuals that are not involved in DE operations. DE operations are applied to the selected parents (set $A$) and generate $\lambda$ children (set $C$).

Selection is based on the dominance relation. First, nondominated individuals (set $R$) are identified in the children population $C$. Then these individual(s) replace the dominated individuals in $A$ (if any exist). As a result, set $A$ is updated. Merging the updated set $A$ with those parent individuals that were not involved in DE operations (set $B$) forms the next parent population $P_{t+1}$. The procedure repeats until reaching the maximum number of fitness evaluations. The output is the best solution found by DE.
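The dominance-based selection step relies on the usual Pareto dominance relation for minimization, which can be sketched as:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse than b in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(objs):
    # indices of the objective vectors not dominated by any other one
    return [i for i, a in enumerate(objs)
            if not any(dominates(b, a) for j, b in enumerate(objs) if j != i)]
```

For instance, among the objective vectors (1, 2), (2, 3) and (0, 5), only (2, 3) is dominated.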

Due to time limitations, our algorithm doesn't implement a special mechanism used in CMODE: the infeasible solution replacement mechanism. The idea of this mechanism is that, when a children population is composed of only infeasible individuals, the "best" child, i.e. the one with the lowest degree of constraint violation, is stored in an archive. After a fixed interval of generations, some randomly selected infeasible individuals in the archive replace the same number of randomly selected individuals in the parent population. Although omitting this mechanism significantly influences the efficiency of our algorithm, our study is still meaningful since our goal is to investigate whether using more objectives may improve the performance of MOEAs for constrained optimization.

V Experiments and Results

V-A Experimental Settings

In order to study the relationship between the performance of SMODE and the number of helper functions, thirteen benchmark functions were employed as the instances in our experiments. These benchmarks have been used to test the performance of MOEAs for constrained optimization in [12] and are a part of the benchmark collection of the IEEE CEC 2006 special session on constrained real-parameter optimization [32]. Their detailed information is provided in Table I, where $n$ is the number of decision variables, LI stands for the number of linear inequality constraints, NE the number of nonlinear equality constraints, and NI the number of nonlinear inequality constraints; $\rho$ denotes the ratio between the sizes of the feasible search space and the entire search space, and $a$ is the number of active constraints at the optimal solution.

Fcn $n$ Type of $f$ LI NE NI $a$
g01 13 quadratic 9 0 0 6
g02 20 nonlinear 1 0 1 1
g03 10 nonlinear 0 1 0 1
g04 5 quadratic 0 0 6 2
g05 4 nonlinear 2 3 0 3
g06 2 nonlinear 0 0 2 2
g07 10 quadratic 3 0 5 6
g08 2 nonlinear 0 0 2 0
g09 7 nonlinear 0 0 4 2
g10 8 linear 3 0 3 3
g11 2 quadratic 0 1 0 1
g12 3 quadratic 0 0 0
g13 5 nonlinear 0 3 0 3
TABLE I: Summary of 13 Benchmark Functions

SMODE contains several parameters: the population size $N$, the scaling factor $F$ in mutation, and the crossover control parameter $CR$. Usually, $F$ is set within $[0, 2]$ and mostly from $0.4$ to $1$; $CR$ is chosen from $[0, 1]$, and higher values can produce better results in most cases. In our experiments, $F$ was set to 0.6 and $CR$ to 0.95. The tolerance value $\delta$ for the equality constraints was set to 0.0001. The population size $N$, the penalty coefficients $c_1 < c_2 < c_3$ and the maximum number of fitness evaluations were also fixed for all runs.

As suggested in [32], 25 independent runs are set for each benchmark function.

V-B Initial Results of Proposed Algorithm

So far, only initial experiments have been completed. Table II shows the function error values achieved by SMODE with only two helper functions on the thirteen benchmark functions. In the table, NA means that no feasible solution was found. SMODE found a feasible solution on only one benchmark function, g06. The results are worse than those achieved by CMODE because the infeasible solution replacement mechanism is not utilized in SMODE. If this mechanism were added, SMODE would become the same as CMODE and their performances could be the same. This is our ongoing work.

Fcn best median worst  mean distance Std
g01 NA NA NA NA NA
g02 NA NA NA NA NA
g03 NA NA NA NA NA
g04 NA NA NA NA NA
g05 NA NA NA NA NA
g06 8.9880E+02 2.7278E+03 NA NA NA
g07 NA NA NA NA NA
g08 NA NA NA NA NA
g09 NA NA NA NA NA
g10 NA NA NA NA NA
g11 NA NA NA NA NA
g12 NA NA NA NA NA
g13 NA NA NA NA NA
TABLE II: Function Error Values Achieved by SMODE with Only Two Helper Functions with FES

Table III gives the function error values achieved by SMODE with four helper functions on the thirteen benchmark functions. The results achieved by SMODE with four helper functions are better than those with only two helper functions. SMODE can find feasible solutions on seven benchmark functions: g02, g04, g06, g08, g09, g11 and g12. However, the results are still worse than those achieved by CMODE because the infeasible solution replacement mechanism is not utilized in SMODE.

Fcn best median worst  mean distance Std
g01 NA NA NA NA NA
g02 5.4500E-01 6.1341E-01 7.7128E-01 8.5337E-01 9.4740E-01
g03 NA NA NA NA NA
g04 3.5549E+02 5.9871E+02 8.8085E+02 1.0152E+02 1.3621E+02
g05 NA NA NA NA NA
g06 3.3688E+02 1.3344E+03 NA NA NA
g07 NA NA NA NA NA
g08 1.1730E-03 1.0102E-02 5.0376E-02 1.1029E-02 1.4452E-02
g09 8.0239E+01 3.1650E+02 6.0861E+02 1.0339E+02 1.2915E+02
g10 NA NA NA NA NA
g11 1.3360E-03 1.2956E-01 NA NA NA
g12 6.2614E-05 3.9500E-04 1.0323E-02 2.1810E-03 2.8250E-03
g13 NA NA NA NA NA
TABLE III: Function Error Values Achieved by SMODE with Four Helper Functions with FES

Table IV gives the function error values achieved by SMODE with six helper functions on the thirteen benchmark functions. The results achieved by SMODE with six helper functions are similar to those with four helper functions. SMODE can also find feasible solutions on the same seven benchmark functions: g02, g04, g06, g08, g09, g11 and g12. The difference between four and six helper functions is thus very small. A possible explanation is that $f_5$ and $f_6$ play a similar role to $f_4$: all three functions belong to the class of penalty functions. Therefore it might be better if helper functions are designed from different backgrounds.

Fcn best median worst  mean distance Std
g01 NA NA NA NA NA
g02 5.4717E-01 5.8388E-01 6.2491E-01 2.6393E-01 3.1753E-01
g03 NA NA NA NA NA
g04 4.4758E+02 6.0336E+02 7.7147E+02 8.4857E+01 9.7692E+01
g05 NA NA NA NA NA
g06 4.1811E+02 2.8303E+03 5.1241E+02 1.1938E+03 1.4209E+03
g07 NA NA NA NA NA
g08 1.4000E-05 2.6850E-03 2.0954E-02 4.4980E-03 5.9500E-03
g09 1.0152E+02 3.4975E+02 9.3096E+02 9.2581E+01 1.5020E+02
g10 NA NA NA NA NA
g11 1.2160E-03 1.3616E-01 NA NA NA
g12 5.4000E-05 2.6000E-04 1.0015E-02 2.7600E-03 3.2760E-03
g13 NA NA NA NA NA
TABLE IV: Function Error Values Achieved by SMODE with Six Helper Functions with FES

In summary, our initial experimental results confirm that the performance of SMODE with more helper functions (four or six) is better than that with only two helper functions. Currently SMODE performs worse than CMODE, but if the infeasible solution replacement mechanism is added to SMODE, SMODE becomes the same as CMODE and it is expected that their performances could be the same.

VI Conclusion and Future Work

This paper proposes a new multi-objective method for solving constrained optimization problems. The new method keeps two standard objectives: the objective function and also the sum of degrees of constraint violation. But besides them, four more objectives are added. One is based on the feasible rule. The other three come from the penalty functions.

This paper conducts an initial experimental study on thirteen benchmark functions. A simplified version of CMODE [27] is applied to solving the resulting multi-objective optimization problems. Our initial experimental results are positive. They confirm our expectation that adding helper functions can be useful. The performance of SMODE with more helper functions (four or six) is better than that with only two helper functions.

Due to time limitations, a key part of CMODE, the infeasible solution replacement mechanism, is not implemented in SMODE. Thus the results achieved by SMODE are worse than those achieved by CMODE. But if this mechanism is added to SMODE, SMODE becomes the same as CMODE and their performances could be the same. A study of the original CMODE with different numbers of helper functions is our ongoing work.

References

  • [1] Z. Michalewicz and M. Schoenauer, “Evolutionary algorithms for constrained parameter optimization problems,” Evolutionary Computation, vol. 4, no. 1, pp. 1–32, 1996.
  • [2] C. A. Coello Coello, “Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art,” Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245–1287, 2002.
  • [3] E. Mezura-Montes and C. A. Coello Coello, “Constraint-handling in nature-inspired numerical optimization: past, present and future,” Swarm and Evolutionary Computation, vol. 1, no. 4, pp. 173–194, 2011.
  • [4] C. Segura, C. A. C. Coello, G. Miranda, and C. León, “Using multi-objective evolutionary algorithms for single-objective optimization,” 4OR, vol. 11, no. 3, pp. 201–228, 2013.
  • [5] S. J. Louis and G. Rawlins, “Pareto optimality, GA-easiness and deception,” in Proceedings of 5th International Conference on Genetic Algorithms.   Morgan Kaufmann, 1993, pp. 118–123.
  • [6] J. D. Knowles, R. A. Watson, and D. W. Corne, “Reducing local optima in single-objective problems by multi-objectivization,” in Evolutionary Multi-Criterion Optimization.   Springer, 2001, pp. 269–283.
  • [7] E. Mezura-Montes and C. A. C. Coello, “Constrained optimization via multiobjective evolutionary algorithms,” in Multiobjective Problem Solving from Nature, J. Knowles, D. Corne, K. Deb, and D. Chair, Eds.   Springer Berlin Heidelberg, 2008, pp. 53–75.
  • [8] Y. Wang, D. Liu, and Y.-M. Cheung, “Preference bi-objective evolutionary algorithm for constrained optimization,” in Computational Intelligence and Security.   Springer, 2005, pp. 184–191.
  • [9] Y. Wang, Z. Cai, G. Guo, and Y. Zhou, “Multiobjective optimization and hybrid evolutionary algorithm to solve constrained optimization problems,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 37, no. 3, pp. 560–575, 2007.
  • [10] P. D. Surry and N. J. Radcliffe, “The COMOGA method: constrained optimisation by multi-objective genetic algorithms,” Control and Cybernetics, vol. 26, pp. 391–412, 1997.
  • [11] Y. Zhou, Y. Li, J. He, and L. Kang, “Multi-objective and MGG evolutionary algorithm for constrained optimisation,” in Proceedings of 2003 IEEE Congress on Evolutionary Computation.   Canberra, Australia: IEEE Press, 2003, pp. 1–5.
  • [12] Z. Cai and Y. Wang, “A multiobjective optimization-based evolutionary algorithm for constrained optimization,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 658–675, 2006.
  • [13] K. Deb, S. Lele, and R. Datta, “A hybrid evolutionary multi-objective and SQP based procedure for constrained optimization,” in Advances in Computation and Intelligence, L. Kang, Y. Liu, and S. Zeng, Eds.   Springer, 2007, pp. 36–45.
  • [14] S. Venkatraman and G. G. Yen, “A generic framework for constrained optimization using genetic algorithms,” IEEE Transactions on Evolutionary Computation, vol. 9, no. 4, pp. 424–435, 2005.
  • [15] T. Ray, H. Singh, A. Isaacs, and W. Smith, “Infeasibility driven evolutionary algorithm for constrained optimization,” in Constraint-Handling in Evolutionary Optimization, E. Mezura-Montes, Ed.   Springer Berlin Heidelberg, 2009, vol. 198, pp. 145–165.
  • [16] H. Jain and K. Deb, “An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part ii: handling constraints and extending to an adaptive approach,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 602–622, 2014.
  • [17] S. Zapotecas Martinez and C. Coello Coello, “A multi-objective evolutionary algorithm based on decomposition for constrained multi-objective optimization,” in Proceedings of 2014 IEEE Congress on Evolutionary Computation.   IEEE, 2014, pp. 429–436.
  • [18] W.-F. Gao, G. G. Yen, and S.-Y. Liu, “A dual-population differential evolution with coevolution for constrained optimization,” IEEE Transactions on Cybernetics, vol. 45, no. 5, pp. 1094–1107, 2015.
  • [19] C. A. C. Coello and E. Mezura-Montes, “Handling constraints in genetic algorithms using dominance-based tournaments,” in Adaptive Computing in Design and Manufacture V.   Springer, 2002, pp. 273–284.
  • [20] F. Jiménez, A. F. Gómez-Skarmeta, and G. Sánchez, “How evolutionary multiobjective optimization can be used for goals and priorities based optimization,” in Primer Congreso Espanol de Algoritmos Evolutivos y Bioinspirados (AEB’02).   Mérida, Espana, Universidad de Extremadura, 2002, pp. 460–465.
  • [21] S. Kukkonen and J. Lampinen, “Constrained real-parameter optimization with generalized differential evolution,” in Proceedings of 2006 IEEE Congress on Evolutionary Computation.   IEEE, 2006, pp. 207–214.
  • [22] W. Gong and Z. Cai, “A multiobjective differential evolution algorithm for constrained optimization,” in Proceedings of IEEE Congress on Evolutionary Computation.   IEEE, 2008, pp. 181–188.
  • [23] T. Ray, T. Kang, and S. K. Chye, “An evolutionary algorithm for constrained optimization,” in Proceedings of 2000 Genetic and Evolutionary Computation Conference.   San Francisco: Morgan Kaufmann, 2000, pp. 771–777.
  • [24] A. H. Aguirre, S. B. Rionda, C. A. Coello Coello, G. L. Lizárraga, and E. M. Montes, “Handling constraints using multiobjective optimization concepts,” International Journal for Numerical Methods in Engineering, vol. 59, no. 15, pp. 1989–2017.
  • [25] J. J. Liang and P. Suganthan, “Dynamic multi-swarm particle swarm optimizer with a novel constraint-handling mechanism,” in Proceedings of 2006 IEEE Congress on Evolutionary Computation.   IEEE, 2006, pp. 9–16.
  • [26] S. Watanabe and K. Sakakibara, “Multi-objective approaches in a single-objective optimization environment,” in Proceedings of 2005 IEEE Congress on Evolutionary Computation, vol. 2.   IEEE, 2005, pp. 1714–1721.
  • [27] Y. Wang and Z. Cai, “Combining multiobjective optimization with differential evolution to solve constrained optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 1, pp. 117–134, 2012.
  • [28] R. Storn and K. Price, “Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
  • [29] S. Das and P. Suganthan, “Differential evolution: A survey of the state-of-the-art,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 4–31, Feb 2011.
  • [30] J. He, B. Mitavskiy, and Y. Zhou, “A theoretical assessment of solution quality in evolutionary algorithms for the knapsack problem,” in Proceedings of 2014 IEEE Congress on Evolutionary Computation.   IEEE, 2014, pp. 141–148.
  • [31] K. Deb, “An efficient constraint handling method for genetic algorithms,” Computer Methods in Applied Mechanics and Engineering, vol. 186, no. 2, pp. 311–338, 2000.
  • [32] J. Liang, T. P. Runarsson, E. Mezura-Montes, M. Clerc, P. Suganthan, C. C. Coello, and K. Deb, “Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization,” Nanyang Technological University, Tech. Rep., 2006. [Online]. Available: http://web.mysites.ntu.edu.sg/epnsugan/PublicSite/SharedDocuments/Forms/AllItems.aspx