I Introduction
Optimization problems in real-world applications are usually subject to different kinds of constraints. These problems are called constrained optimization problems (COPs). In the minimization case, a COP is formulated as follows:
(1) $\min f(\vec{x}), \quad \vec{x} = (x_1, \dots, x_n) \in S,$
(2) subject to $g_j(\vec{x}) \le 0$, $j = 1, \dots, q$, and $h_j(\vec{x}) = 0$, $j = q+1, \dots, m$,
where $S$ is a bounded domain in $\mathbb{R}^n$, given by
(3) $S = \{ \vec{x} \mid L_k \le x_k \le U_k, \ k = 1, \dots, n \},$
where $\vec{L} = (L_1, \dots, L_n)$ is the lower boundary and $\vec{U} = (U_1, \dots, U_n)$ the upper boundary. $g_j(\vec{x}) \le 0$ is the $j$th inequality constraint, while $h_j(\vec{x}) = 0$ is the $j$th equality constraint. The feasible region $\Omega$ is defined as the set of points in $S$ that satisfy all constraints.
If an inequality constraint satisfies $g_j(\vec{x}) = 0$ at a point $\vec{x} \in \Omega$, we say it is active at $\vec{x}$. All equality constraints $h_j(\vec{x}) = 0$, $j = q+1, \dots, m$, are considered active at all points of $\Omega$.
Many constraint-handling techniques have been proposed in the literature. The most popular ones include penalty function methods, the feasibility rule, multi-objective optimization, and repair methods. A detailed introduction to this topic can be found in several comprehensive surveys [1, 2, 3].
This paper focuses on multi-objective optimization methods, which are regarded as one of the most promising ways of dealing with COPs [4]. The technique is based on using multi-objective evolutionary algorithms (MOEAs) for solving single-objective optimization problems. This idea can be traced back to the 1990s [5] and is also termed multi-objectivization [6]. Multi-objective methods separate the objective function and the constraint violation degrees into different fitness functions. This is unlike penalty functions, which combine them into a single fitness function. The main purpose of using multi-objective optimization is to relax the requirement of setting or fine-tuning parameters, as happens with penalty function methods.
Research on dealing with COPs using MOEAs has made significant achievements since 2000, and many variant methods of applying MOEAs to COPs exist. According to the taxonomy proposed in [7, 4], these methods are classified into five categories:

Bi-objective feasibility-compliant methods: methods that transform the original single-objective COP into an unconstrained bi-objective optimization problem, where the first objective is the original objective function and the second objective is a measure of the constraint violations. During solving the multi-objective problem, selection always prefers a feasible solution over an infeasible solution. [8, 9] are two examples of bi-objective feasibility-compliant methods. However, the amount of research in this category is very limited.

Bi-objective non-feasibility-compliant methods: as in the first category, the original single-objective COP is transformed into an unconstrained bi-objective optimization problem. But during solving the latter problem, selection is designed based on the dominance relation and does not prefer a feasible solution over an infeasible solution. A lot of work belongs to this category, such as [10, 11, 12, 13, 14, 15, 16, 17, 18].

Multi-objective feasibility-compliant methods: methods that transform the original single-objective COP into an unconstrained multi-objective optimization problem whose first objective is the original objective, while the other objectives correspond to the individual constraints of the COP. During solving the multi-objective problem, selection always prefers a feasible solution over an infeasible solution. The work in this category includes [19, 20, 21, 22].

Multi-objective non-feasibility-compliant methods: as in the third category, the original single-objective COP is transformed into an unconstrained multi-objective optimization problem. But during solving the multi-objective problem, selection does not prefer a feasible solution over an infeasible solution. This idea was used in [23, 24, 25].

Other multi-objective methods: methods that transform the original single-objective COP into an unconstrained multi-objective optimization problem, but some or all of the objectives in the latter problem are different from the original objective function and the degrees of constraint violation. For example, the first objective in [26] is the original objective function with noise added; the second objective equals the original objective function but considers relaxed constraints. This category is less studied than the others. The main problem is how to construct helpful objectives.
The multi-objective method in this paper belongs to the fifth category. Our method keeps the standard objectives: the objective function and the total degree of constraint violation. Besides them, more objectives are added: one is based on the feasible rule, and the others come from penalty functions. In this way a new multi-objective model is constructed for constrained optimization. A natural question is whether adding more objectives can improve the performance of MOEAs for solving constrained optimization problems. This paper conducts an experimental study: a simplified version of CMODE [27] is applied to solve the resulting multi-objective optimization problems. Our initial experimental result is positive. It confirms our expectation that adding helper functions could be useful.
The rest of the paper is organized as follows. Section II reviews differential evolution. Section III proposes a new multi-objective model for constrained optimization. Section IV describes a multi-objective differential evolution algorithm with helper functions. Section V gives experimental results and compares the proposed approach with different numbers of helper functions. Section VI concludes the paper.
II Differential Evolution
Differential evolution (DE) was proposed by Storn and Price [28]; it is arguably one of the most powerful stochastic real-parameter optimization algorithms in current use [29].
In DE, a population is represented by $NP$ $n$-dimensional vectors:
(4) $\vec{x}_{i,G}, \quad i = 1, \dots, NP,$
(5) $\vec{x}_{i,G} = (x_{i,1,G}, \dots, x_{i,n,G}),$
where $G$ represents the generation counter. The population size $NP$ does not change during the minimization process. The initial vectors are chosen randomly from $[\vec{L}, \vec{U}]$. The formula below shows how to generate an initial individual at random:
(6) $x_{i,k,0} = L_k + \mathrm{rand}_k \cdot (U_k - L_k), \quad k = 1, \dots, n,$
where $\mathrm{rand}_k$ is a uniform random number from $[0, 1]$.
There exist several variants of DE. The original DE algorithm [28] is utilized in this paper. This DE algorithm consists of three operations: mutation, crossover and selection, which are described as follows.

Mutation: for each target vector $\vec{x}_{i,G}$, where $i = 1, \dots, NP$, a mutant vector is generated by
(7) $\vec{v}_{i,G+1} = \vec{x}_{r_1,G} + F \cdot (\vec{x}_{r_2,G} - \vec{x}_{r_3,G}),$
where the random indexes $r_1, r_2, r_3 \in \{1, \dots, NP\}$ are mutually different integers. They are also chosen to be different from the running index $i$. $F$ is a real and constant factor from $[0, 2]$ which controls the amplification of the differential variation $(\vec{x}_{r_2,G} - \vec{x}_{r_3,G})$. In case $\vec{v}_{i,G+1}$ is out of the interval $[\vec{L}, \vec{U}]$, the mutation operation is repeated until $\vec{v}_{i,G+1}$ falls in $[\vec{L}, \vec{U}]$.

Crossover: in order to increase population diversity, crossover is also used in DE. The trial vector $\vec{u}_{i,G+1}$ is generated by mixing the target vector $\vec{x}_{i,G}$ with the mutant vector $\vec{v}_{i,G+1}$. The trial vector is constructed as follows:
(8) $u_{i,k,G+1} = \begin{cases} v_{i,k,G+1}, & \text{if } \mathrm{rand}_k \le CR \text{ or } k = k_{\mathrm{rand}}, \\ x_{i,k,G}, & \text{otherwise}, \end{cases}$
where $\mathrm{rand}_k$ is a uniform random number from $[0, 1]$. Index $k_{\mathrm{rand}}$ is randomly chosen from $\{1, \dots, n\}$. $CR \in [0, 1]$ denotes the crossover constant, which has to be determined by the user. In addition, the condition "$k = k_{\mathrm{rand}}$" is used to ensure that the trial vector gets at least one parameter from the mutant vector $\vec{v}_{i,G+1}$.

Selection: a greedy criterion is used to decide which offspring generated by mutation and crossover should be selected into the next population. The trial vector $\vec{u}_{i,G+1}$ is compared to the target vector $\vec{x}_{i,G}$, and the better one is kept for the next generation.
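The three operations above can be sketched as a single generation step. The following is a minimal Python illustration with our own function and parameter names, not the authors' implementation:

```python
import numpy as np

def de_step(pop, f, bounds, F=0.6, CR=0.95, rng=None):
    """One generation of classic DE: mutation, binomial crossover, greedy selection."""
    rng = rng or np.random.default_rng()
    NP, n = pop.shape
    low, up = bounds
    new_pop = pop.copy()
    for i in range(NP):
        # three mutually distinct random indices, all different from i
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], size=3, replace=False)
        # mutation: v = x_r1 + F * (x_r2 - x_r3); repeat until v stays in [low, up]
        v = pop[r1] + F * (pop[r2] - pop[r3])
        while np.any(v < low) or np.any(v > up):
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], size=3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])
        # binomial crossover: take v_k with probability CR, and always at k = k_rand
        k_rand = rng.integers(n)
        mask = rng.random(n) < CR
        mask[k_rand] = True
        u = np.where(mask, v, pop[i])
        # greedy selection: keep the better of target and trial
        if f(u) <= f(pop[i]):
            new_pop[i] = u
    return new_pop
```

Because selection is greedy, the best objective value in the population can never worsen from one generation to the next.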
III Multi-objective Model with More Helper Functions for Constrained Optimization
Without loss of generality, consider a minimization problem with only two constraints, one inequality constraint and one equality constraint:
(9) $\min f(\vec{x}), \quad \text{subject to } g(\vec{x}) \le 0 \text{ and } h(\vec{x}) = 0.$
A multi-objective method transforms the above single-objective optimization problem with constraints into a multi-objective optimization problem without constraints.
The first fitness function is the original objective function, without considering constraints:
(10) $f_1(\vec{x}) = f(\vec{x}).$
Notice that the optimal solution to minimizing $f_1$ might be different from that of the original constrained optimization problem (9); therefore $f_1$ is only a helper fitness function.
The second objective is related to constraint violation. Define the degree of violating each constraint as
(11) $G_1(\vec{x}) = \max\{0, g(\vec{x})\},$
(12) $G_2(\vec{x}) = \max\{0, |h(\vec{x})| - \delta\},$
where $\delta$ is the tolerance allowed for the equality constraint.
The second fitness function is defined as the sum of the constraint violation degrees:
(13) $f_2(\vec{x}) = G_1(\vec{x}) + G_2(\vec{x}).$
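A direct implementation of the two violation degrees, Eqs. (11)-(12), and their sum, Eq. (13), for a single inequality constraint and a single equality constraint (a sketch; the function names are ours):

```python
def violation_degrees(x, g, h, delta=1e-4):
    """Degrees of violating one inequality g(x) <= 0 and one equality h(x) = 0.

    G1(x) = max{0, g(x)};  G2(x) = max{0, |h(x)| - delta},
    where delta is the tolerance allowed for the equality constraint.
    """
    G1 = max(0.0, g(x))
    G2 = max(0.0, abs(h(x)) - delta)
    return G1, G2

def f2(x, g, h, delta=1e-4):
    """Second fitness function: sum of the constraint violation degrees."""
    return sum(violation_degrees(x, g, h, delta))
```

A feasible point has $f_2 = 0$; any strictly positive value measures how far the point is from feasibility.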
The above two objectives are widely used in multi-objective methods for constrained optimization [4]. An interesting question is whether using more fitness functions can improve the performance of MOEAs. This paper aims to investigate the relationship between the performance of multi-objective methods and the number of objectives used.
One problem is how to construct new helper functions. This paper designs two types of general-purpose fitness functions, constructed from the feasible rule and the penalty method. Any problem-specific knowledge can be used in designing helper functions. For example, inspired by a greedy algorithm, several helper functions were specially constructed for solving the 0-1 knapsack problem in [30].
Besides the original objective function $f_1$ and the sum of constraint violation degrees $f_2$, the third fitness function is designed by the feasible rule [31]. During pairwise comparison of individuals:

when two feasible solutions are compared, the one with the better objective function value is chosen;

when one feasible solution and one infeasible solution are compared, the feasible solution is chosen;

when two infeasible solutions are compared, the one with smaller constraint violation is chosen.
According to the feasible rule, the third fitness function is constructed as follows: for an individual $\vec{x}$ in a population $P$,
(14) $f_3(\vec{x}) = \begin{cases} f(\vec{x}), & \text{if } \vec{x} \text{ is feasible}, \\ f_{\mathrm{worst}} + f_2(\vec{x}), & \text{otherwise}. \end{cases}$
In the above, $f_{\mathrm{worst}}$ is the "worst" fitness of the feasible individuals in population $P$, given by
(15) $f_{\mathrm{worst}} = \max\{ f(\vec{y}) \mid \vec{y} \in P, \ \vec{y} \text{ feasible} \}.$
Since the reference point $f_{\mathrm{worst}}$ depends on the population $P$, the value of $f_3$ for the same $\vec{x}$ might differ between populations. However, the optimal feasible solution that minimizes $f_3$ is always the best in any population; thus the optimal feasible solution to minimizing $f_3$ is exactly the same as that of the constrained optimization problem. For this reason, $f_3$ is called an equivalent fitness function.
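Eqs. (14)-(15) can be sketched directly in Python. This is our own illustration, not the paper's code; when the population has no feasible member we fall back to a reference point of 0, which is an assumption:

```python
def feasible_rule_fitness(x, pop, f, violation):
    """Feasible-rule fitness, a sketch of Eq. (14): feasible points keep their
    objective value; infeasible points get the worst feasible objective in the
    population plus their own violation, so every feasible point beats every
    infeasible one."""
    if violation(x) == 0:
        return f(x)
    # reference point of Eq. (15): the largest ("worst") objective value among
    # feasible members; 0 if the population has no feasible member (assumption)
    f_worst = max((f(y) for y in pop if violation(y) == 0), default=0.0)
    return f_worst + violation(x)
```

With this construction, sorting a mixed population by the returned value reproduces the three pairwise-comparison cases of the feasible rule.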
Inspired by the penalty function method, more fitness functions with different penalty coefficients are constructed as follows:
(16) $f_4(\vec{x}) = f(\vec{x}) + r_4 f_2(\vec{x}),$
(17) $f_5(\vec{x}) = f(\vec{x}) + r_5 f_2(\vec{x}),$
(18) $f_6(\vec{x}) = f(\vec{x}) + r_6 f_2(\vec{x}),$
where $r_4, r_5, r_6$ are penalty coefficients. If a coefficient is set to $+\infty$, the corresponding function represents a death penalty to infeasible solutions. Such a function is a helper function because minimizing it might not lead to the optimal feasible solution.
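The family of penalty fitness functions in Eqs. (16)-(18) differs only in the coefficient, so it can be generated by one factory function. This sketch (our own naming) also shows the death-penalty limiting case:

```python
import math

def make_penalty_fitness(f, violation, r):
    """Penalty-based helper fitness f(x) + r * violation(x). Setting r to
    infinity acts as a death penalty: infeasible points get infinite fitness."""
    def fitness(x):
        v = violation(x)
        if v == 0.0:
            return f(x)          # feasible: no penalty at all
        return f(x) + r * v      # infeasible: r = inf yields infinite fitness
    return fitness
```

For instance, `f4, f5, f6 = (make_penalty_fitness(f, violation, r) for r in (r4, r5, r6))` builds the three helpers from increasing coefficients.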
In summary, the original constrained optimization problem is transformed into a multi-objective optimization problem:
(19) $\min \big( f_1(\vec{x}), f_2(\vec{x}), f_3(\vec{x}), f_4(\vec{x}), f_5(\vec{x}), f_6(\vec{x}) \big),$
which consists of one equivalent function and five helper functions. This new multi-objective model for constrained optimization is the main contribution of this paper. The model can potentially include many more objectives.
IV Multi-objective Differential Evolution for Constrained Optimization
The CMODE framework [27] is chosen to solve the above multi-objective optimization problem (19). Different from normal MOEAs, CMODE is specially designed for solving constrained optimization problems; hence it is expected to be efficient at solving problem (19). A comparison study with several other MOEAs is still ongoing.
CMODE [27] was originally applied to a bi-objective optimization problem consisting of only two objectives: $f_1$ and $f_2$. However, it is easy to reuse the existing framework of CMODE for multi-objective optimization problems. Due to time limitations, a simplified CMODE algorithm is implemented in this paper. To distinguish it from the original CMODE, the simplified version is abbreviated as SMODE. The algorithm is explained step by step in the following.
At the beginning, an initial population is chosen at random, where all initial vectors are chosen randomly from $[\vec{L}, \vec{U}]$.
At each generation, the parent population is split into two groups: one group of parent individuals that are used for DE operations, and another group of individuals that are not involved in DE operations. DE operations are applied to the first group and generate a children population.
Selection is based on the dominance relation. First, the non-dominated individuals are identified from the children population. Then these individuals replace the dominated individuals in the group used for DE operations (if any exist), so that group is updated. The updated group is merged with the other group to form the next parent population. The procedure repeats until the maximum number of evaluations is reached. The output is the best solution found by DE.
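The dominance relation that drives this selection step can be written compactly. The following sketch (our own helper names, minimization assumed) identifies the non-dominated members of a set of objective vectors:

```python
def dominates(a, b):
    """Pareto dominance on objective vectors (minimization): a dominates b if a
    is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(objs):
    """Indices of the non-dominated members of a list of objective vectors."""
    return [i for i, a in enumerate(objs)
            if not any(dominates(b, a) for j, b in enumerate(objs) if j != i)]
```

In SMODE each objective vector would hold the six fitness values of problem (19) evaluated on one child.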
Due to time limitations, our algorithm does not implement a special mechanism used in CMODE: the infeasible solution replacement mechanism. The idea of this replacement mechanism is that, provided the children population is composed only of infeasible individuals, the "best" child, i.e., the one with the lowest degree of constraint violation, is stored in an archive. After a fixed interval of generations, some randomly selected infeasible individuals in the archive replace the same number of randomly selected individuals in the parent population. Although this omission significantly influences the efficiency of our algorithm, our study is still meaningful, since our goal is to investigate whether using more objectives may improve the performance of MOEAs for constrained optimization.
V Experiments and Results
V-A Experimental Settings
In order to study the relationship between the performance of SMODE and the number of helper functions, thirteen benchmark functions were employed as test instances. These benchmarks have been used to test the performance of MOEAs for constrained optimization in [12] and are part of the benchmark collection of the IEEE CEC 2006 special session on constrained real-parameter optimization [32]. Their detailed information is provided in Table I, where $n$ is the number of decision variables, LI stands for the number of linear inequality constraints, NE for the number of nonlinear equality constraints, and NI for the number of nonlinear inequality constraints; $\rho$ denotes the ratio between the sizes of the feasible region and the entire search space, and $a$ is the number of active constraints at the optimal solution.
Fcn  n  Type of f  LI  NE  NI  a

g01  13  quadratic  9  0  0  6  
g02  20  nonlinear  1  0  1  1  
g03  10  nonlinear  0  1  0  1  
g04  5  quadratic  0  0  6  2  
g05  4  nonlinear  2  3  0  3  
g06  2  nonlinear  0  0  2  2  
g07  10  quadratic  3  0  5  6  
g08  2  nonlinear  0  0  2  0  
g09  7  nonlinear  0  0  4  2  
g10  8  linear  3  0  3  3  
g11  2  quadratic  0  1  0  1  
g12  3  quadratic  0  0  1  0  
g13  5  nonlinear  0  3  0  3 
SMODE contains several parameters: the population size $NP$, the scaling factor $F$ in mutation, and the crossover control parameter $CR$. Usually, $F$ is set within $[0, 2]$, and $CR$ is chosen from $[0, 1]$, where higher values can produce better results in most cases. In our experiments, $F$ is set to 0.6 and $CR$ to 0.95. The tolerance value $\delta$ for the equality constraints was set to 0.0001. The population size $NP$, the penalty coefficients $r_4, r_5, r_6$, and the maximum number of fitness evaluations were also fixed in advance.
As suggested in [32], 25 independent runs were performed for each benchmark function.
V-B Initial Results of the Proposed Algorithm
Initial experiments have been completed. Table II shows the function error values achieved by SMODE with only two helper functions on the thirteen benchmark functions. In the table, NA means that no feasible solution was found. SMODE found a feasible solution on only one benchmark function, g06. The results are worse than those achieved by CMODE because the infeasible solution replacement mechanism is not implemented in SMODE. If this mechanism were added, SMODE would be the same as CMODE and their performances could be the same. This is our ongoing work.
Fcn  best  median  worst  mean distance  Std 
g01  NA  NA  NA  NA  NA 
g02  NA  NA  NA  NA  NA 
g03  NA  NA  NA  NA  NA 
g04  NA  NA  NA  NA  NA 
g05  NA  NA  NA  NA  NA 
g06  8.9880E+02  2.7278E+03  NA  NA  NA 
g07  NA  NA  NA  NA  NA 
g08  NA  NA  NA  NA  NA 
g09  NA  NA  NA  NA  NA 
g10  NA  NA  NA  NA  NA 
g11  NA  NA  NA  NA  NA 
g12  NA  NA  NA  NA  NA 
g13  NA  NA  NA  NA  NA 
Table III gives the function error values achieved by SMODE with four helper functions on the thirteen benchmark functions. The results achieved with four helper functions are better than those with only two: SMODE can find feasible solutions on seven benchmark functions (g02, g04, g06, g08, g09, g11, g12). However, the results are still worse than those achieved by CMODE because the infeasible solution replacement mechanism is not implemented in SMODE.
Fcn  best  median  worst  mean distance  Std 
g01  NA  NA  NA  NA  NA 
g02  5.4500E-01  6.1341E-01  7.7128E-01  8.5337E-01  9.4740E-01 
g03  NA  NA  NA  NA  NA 
g04  3.5549E+02  5.9871E+02  8.8085E+02  1.0152E+02  1.3621E+02 
g05  NA  NA  NA  NA  NA 
g06  3.3688E+02  1.3344E+03  NA  NA  NA 
g07  NA  NA  NA  NA  NA 
g08  1.1730E-03  1.0102E-02  5.0376E-02  1.1029E-02  1.4452E-02 
g09  8.0239E+01  3.1650E+02  6.0861E+02  1.0339E+02  1.2915E+02 
g10  NA  NA  NA  NA  NA 
g11  1.3360E-03  1.2956E-01  NA  NA  NA 
g12  6.2614E-05  3.9500E-04  1.0323E-02  2.1810E-03  2.8250E-03 
g13  NA  NA  NA  NA  NA 
Table IV gives the function error values achieved by SMODE with six helper functions on the thirteen benchmark functions. The results achieved with six helper functions are similar to those with four: SMODE can again find feasible solutions on the same seven benchmark functions (g02, g04, g06, g08, g09, g11, g12), and the difference between four and six helper functions is very small. A possible explanation is that $f_5$ and $f_6$ play a similar role to $f_4$, since all three functions belong to the class of penalty functions. Therefore it might be better to design helper functions from different backgrounds.
Fcn  best  median  worst  mean distance  Std 
g01  NA  NA  NA  NA  NA 
g02  5.4717E-01  5.8388E-01  6.2491E-01  2.6393E-01  3.1753E-01 
g03  NA  NA  NA  NA  NA 
g04  4.4758E+02  6.0336E+02  7.7147E+02  8.4857E+01  9.7692E+01 
g05  NA  NA  NA  NA  NA 
g06  4.1811E+02  2.8303E+03  5.1241E+02  1.1938E+03  1.4209E+03 
g07  NA  NA  NA  NA  NA 
g08  1.4000E-05  2.6850E-03  2.0954E-02  4.4980E-03  5.9500E-03 
g09  1.0152E+02  3.4975E+02  9.3096E+02  9.2581E+01  1.5020E+02 
g10  NA  NA  NA  NA  NA 
g11  1.2160E-03  1.3616E-01  NA  NA  NA 
g12  5.4000E-05  2.6000E-04  1.0015E-02  2.7600E-03  3.2760E-03 
g13  NA  NA  NA  NA  NA 
In summary, our initial experimental results confirm that the performance of SMODE with more helper functions (four or six) is better than that with only two helper functions. Currently SMODE performs worse than CMODE, but if the infeasible solution replacement mechanism were added to SMODE, SMODE would be the same as CMODE and it is expected that their performances could be the same.
VI Conclusion and Future Work
This paper proposes a new multi-objective method for solving constrained optimization problems. The new method keeps two standard objectives: the objective function and the sum of the degrees of constraint violation. Besides them, four more objectives are added: one is based on the feasible rule, and the other three come from penalty functions.
This paper conducts an initial experimental study on thirteen benchmark functions. A simplified version of CMODE [27] is applied to solve the resulting multi-objective optimization problems. Our initial experimental results are positive: they confirm our expectation that adding helper functions could be useful. The performance of SMODE with more helper functions (four or six) is better than that with only two helper functions.
Due to time limitations, a key part of CMODE, the infeasible solution replacement mechanism, is not implemented in SMODE; thus the results achieved by SMODE are worse than those achieved by CMODE. If this mechanism were added to SMODE, SMODE would be the same as CMODE and their performances could be the same. A study of the original CMODE with different numbers of helper functions is our ongoing work.
References
 [1] Z. Michalewicz and M. Schoenauer, “Evolutionary algorithms for constrained parameter optimization problems,” Evolutionary Computation, vol. 4, no. 1, pp. 1–32, 1996.
 [2] C. A. Coello Coello, "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art," Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245–1287, 2002.
 [3] E. Mezura-Montes and C. A. Coello Coello, "Constraint-handling in nature-inspired numerical optimization: past, present and future," Swarm and Evolutionary Computation, vol. 1, no. 4, pp. 173–194, 2011.
 [4] C. Segura, C. A. C. Coello, G. Miranda, and C. León, "Using multi-objective evolutionary algorithms for single-objective optimization," 4OR, vol. 11, no. 3, pp. 201–228, 2013.

 [5] S. J. Louis and G. Rawlins, "Pareto optimality, GA-easiness and deception," in Proceedings of the 5th International Conference on Genetic Algorithms. Morgan Kaufmann, 1993, pp. 118–123.
 [6] J. D. Knowles, R. A. Watson, and D. W. Corne, "Reducing local optima in single-objective problems by multi-objectivization," in Evolutionary Multi-Criterion Optimization. Springer, 2001, pp. 269–283.
 [7] E. Mezura-Montes and C. A. C. Coello, "Constrained optimization via multiobjective evolutionary algorithms," in Multiobjective Problem Solving from Nature, J. Knowles, D. Corne, K. Deb, and D. Chair, Eds. Springer Berlin Heidelberg, 2008, pp. 53–75.
 [8] Y. Wang, D. Liu, and Y.M. Cheung, “Preference biobjective evolutionary algorithm for constrained optimization,” in Computational Intelligence and Security. Springer, 2005, pp. 184–191.
 [9] Y. Wang, Z. Cai, G. Guo, and Y. Zhou, “Multiobjective optimization and hybrid evolutionary algorithm to solve constrained optimization problems,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 37, no. 3, pp. 560–575, 2007.
 [10] P. D. Surry and N. J. Radcliffe, “The COMOGA method: constrained optimisation by multiobjective genetic algorithms,” Control and Cybernetics, vol. 26, pp. 391–412, 1997.
 [11] Y. Zhou, Y. Li, J. He, and L. Kang, “Multiobjective and MGG evolutionary algorithm for constrained optimisation,” in Proceedings of 2003 IEEE Congress on Evolutionary Computation. Canberra, Australia: IEEE Press, 2003, pp. 1–5.
 [12] Z. Cai and Y. Wang, “A multiobjective optimizationbased evolutionary algorithm for constrained optimization,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 658–675, 2006.
 [13] K. Deb, S. Lele, and R. Datta, “A hybrid evolutionary multiobjective and SQP based procedure for constrained optimization,” in Advances in Computation and Intelligence, L. Kang, Y. Liu, and S. Zeng, Eds. Springer, 2007, pp. 36–45.
 [14] S. Venkatraman and G. G. Yen, “A generic framework for constrained optimization using genetic algorithms,” IEEE Transactions on Evolutionary Computation, vol. 9, no. 4, pp. 424–435, 2005.
 [15] T. Ray, H. Singh, A. Isaacs, and W. Smith, "Infeasibility driven evolutionary algorithm for constrained optimization," in Constraint-Handling in Evolutionary Optimization, E. Mezura-Montes, Ed. Springer Berlin Heidelberg, 2009, vol. 198, pp. 145–165.
 [16] H. Jain and K. Deb, “An evolutionary manyobjective optimization algorithm using referencepoint based nondominated sorting approach, part ii: handling constraints and extending to an adaptive approach,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 602–622, 2014.
 [17] S. Zapotecas Martinez and C. Coello Coello, “A multiobjective evolutionary algorithm based on decomposition for constrained multiobjective optimization,” in Proceedings of 2014 IEEE Congress on Evolutionary Computation. IEEE, 2014, pp. 429–436.
 [18] W.F. Gao, G. G. Yen, and S.Y. Liu, “A dualpopulation differential evolution with coevolution for constrained optimization,” IEEE Transactions on Cybernetics, vol. 45, no. 5, pp. 1094–1107, 2015.
 [19] C. A. C. Coello and E. MezuraMontes, “Handling constraints in genetic algorithms using dominancebased tournaments,” in Adaptive Computing in Design and Manufacture V. Springer, 2002, pp. 273–284.
 [20] F. Jiménez, A. F. GómezSkarmeta, and G. Sánchez, “How evolutionary multiobjective optimization can be used for goals and priorities based optimization,” in Primer Congreso Espanol de Algoritmos Evolutivos y Bioinspirados (AEB02). Mérida, Espana, Universidad de Extremadura, 2002, pp. 460–465.
 [21] S. Kukkonen and J. Lampinen, “Constrained realparameter optimization with generalized differential evolution,” in Proceedings of 2006 IEEE Congress on Evolutionary Computation. IEEE, 2006, pp. 207–214.
 [22] W. Gong and Z. Cai, “A multiobjective differential evolution algorithm for constrained optimization,” in Proceedings of IEEE Congress on Evolutionary Computation. IEEE, 2008, pp. 181–188.
 [23] T. Ray, T. Kang, and S. K. Chye, “An evolutionary algorithm for constrained optimization,” in Proceedings of 2000 Genetic and Evolutionary Computation Conference. San Francisco: Morgan Kaufmann, 2000, pp. 771–777.
 [24] A. H. Aguirre, S. B. Rionda, C. A. Coello Coello, G. L. Lizárraga, and E. M. Montes, “Handling constraints using multiobjective optimization concepts,” International Journal for Numerical Methods in Engineering, vol. 59, no. 15, pp. 1989–2017.

 [25] J. J. Liang and P. Suganthan, "Dynamic multiswarm particle swarm optimizer with a novel constraint-handling mechanism," in Proceedings of 2006 IEEE Congress on Evolutionary Computation. IEEE, 2006, pp. 9–16.
 [26] S. Watanabe and K. Sakakibara, "Multi-objective approaches in a single-objective optimization environment," in Proceedings of 2005 IEEE Congress on Evolutionary Computation, vol. 2. IEEE, 2005, pp. 1714–1721.
 [27] Y. Wang and Z. Cai, “Combining multiobjective optimization with differential evolution to solve constrained optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 1, pp. 117–134, 2012.

 [28] R. Storn and K. Price, "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
 [29] S. Das and P. Suganthan, "Differential evolution: A survey of the state-of-the-art," IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 4–31, 2011.
 [30] J. He, B. Mitavskiy, and Y. Zhou, “A theoretical assessment of solution quality in evolutionary algorithms for the knapsack problem,” in Proceedings of 2014 IEEE Congress on Evolutionary Computation. IEEE, 2014, pp. 141–148.
 [31] K. Deb, “An efficient constraint handling method for genetic algorithms,” Computer Methods in Applied Mechanics and Engineering, vol. 186, no. 2, pp. 311–338, 2000.
 [32] J. Liang, T. P. Runarsson, E. Mezura-Montes, M. Clerc, P. Suganthan, C. C. Coello, and K. Deb, "Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization," Nanyang Technological University, Tech. Rep., 2006. [Online]. Available: http://web.mysites.ntu.edu.sg/epnsugan/PublicSite/SharedDocuments/Forms/AllItems.aspx