A binary differential evolution algorithm learning from explored solutions

01/06/2014, by Yu Chen, et al.

Although real-coded differential evolution (DE) algorithms can perform well on continuous optimization problems (CoOPs), designing an efficient binary-coded DE algorithm is still a challenging task. Inspired by the learning mechanism of particle swarm optimization (PSO) algorithms, we propose a binary learning differential evolution (BLDE) algorithm that can efficiently locate global optimal solutions by learning from the last population. We theoretically prove the global convergence of BLDE and compare it with some existing binary-coded evolutionary algorithms (EAs) via numerical experiments. Numerical results show that BLDE is competitive with the compared EAs; in addition, the change curves of a renewal metric and a refinement metric are studied to investigate why BLDE cannot outperform some of the compared EAs on several selected benchmark problems. Finally, we employ BLDE to solve the unit commitment problem (UCP) in power systems to show its applicability to practical problems.


1 Introduction

1.1 Background

Differential evolution (DE) Storn1997 , a competitive evolutionary algorithm that emerged more than a decade ago, has been widely utilized in science and engineering Price2005 ; Das2011 . The simple and straightforward evolving mechanisms of DE endow it with a powerful capability for solving continuous optimization problems (CoOPs), but they hamper its application to discrete optimization problems (DOPs).

To take full advantage of the superiority of mutation in classic DE algorithms, Pampará and Engelbrecht Pampara2006 introduced a trigonometric generating function to transform the real-coded individuals of DE into binary strings, and proposed an angle modulated differential evolution (AMDE) algorithm for DOPs. Compared with the binary differential evolution (BDE) algorithms that directly manipulate binary strings, AMDE was much slower but outperformed the BDE algorithms with respect to the accuracy of the obtained solutions Engelbrecht2007 . Meanwhile, Gong and Tuson proposed a binary DE algorithm via forma analysis Gong2007 , but it cannot perform well on binary constraint satisfaction problems due to its weak exploration ability Yang2008 . Trying to simulate the operation mode of the continuous DE mutation, Kashan et al. Kashan2013 designed a dissimilarity-based differential evolution (DisDE) algorithm that incorporates a measure of dissimilarity in mutation. Numerical results show that DisDE is competitive with some existing binary-coded evolutionary algorithms (EAs).

Moreover, the performance of BDE algorithms can also be improved by incorporating recombination operators of other EAs. Hota and Pat Hota2010 proposed an adaptive quantum-inspired differential evolution algorithm (AQDE) applying quantum computing techniques, while He and Han He2007 introduced the negative selection of artificial immune systems to obtain an artificial immune system based differential evolution (AIS-DE) algorithm. Noting that the logical operations introduced in AIS-DE tend to produce "1" bits with increasing probability, Wu and Tseng Wu2010 proposed a modified binary differential evolution strategy to improve the performance of BDE algorithms on topology optimization of structures.

1.2 Motivation and Contribution

Existing studies have tried to incorporate the recombination strategies of various EAs to obtain efficient BDE algorithms for DOPs, but there are still some points to be improved:

  • AMDE Pampara2006 has to transform real values into binary strings, which leads to an explosion of the computational cost of function evaluations. Meanwhile, the mathematical properties of the transformation function can also influence its performance on various DOPs;

  • BDE algorithms that directly manipulate bit-strings, such as binDE Gong2007 , AIS-DE He2007 and MBDE Wu2010 , cannot effectively imitate the mutation mechanism of continuous DE algorithms. Thus, they cannot perform well on high-dimensional DOPs due to their weak exploration abilities;

  • DisDE Kashan2013 , which incorporates a dissimilarity metric in the mutation operator, has to solve a minimization problem during the mutation process. As a consequence, the computational complexity of DisDE is considerably high.

Generally, it is a challenging task to design an efficient BDE algorithm that perfectly addresses the aforementioned points. Recently, variants of the particle swarm optimization (PSO) algorithm Kennedy1995 have been successfully utilized in real applications Eberhart2001 ; Banks2007 ; Poli2007 ; Banks2008 ; Kennedy2010 . Although DE algorithms perform better than PSO algorithms in some real-world applications Vesterstrom2004 ; Rekanos2008 ; Ponsich2011 , it is still promising to improve DE by incorporating PSO into the evolving process Das2005 ; Moore2006 ; Omran2009 . Considering that the learning mechanism of PSO can accelerate the convergence of populations, we propose a hybrid binary-coded evolutionary algorithm that learns from the last population, named the binary learning differential evolution (BLDE) algorithm. In BLDE, the search process of the population is guided by the renewed information of individuals and by the dissimilarity between individuals and the best explored solution in the population. By this means, BLDE can perform well on DOPs.

The remainder of the paper is structured as follows. Section 2 presents a description of BLDE, and its global convergence is theoretically proved in Section 3. Then, in Section 4, BLDE is compared with some existing algorithms via numerical experiments. To test the performance of BLDE on a real-life problem, we employ it to solve the unit commitment problem (UCP) in Section 5. Finally, discussions and conclusions are presented in Sections 6 and 7.

2 The binary learning differential evolution algorithm

2.1 Framework of the binary learning differential evolution algorithm

1:  Randomly generate two populations X(0) and A(0) of µ individuals; set t := 0;
2:  while the stop criterion is not satisfied do
3:     Let A(t+1) = X(t);
4:     for all x_i ∈ X(t) do
5:        Randomly select x_{r1} and x_{r2} from X(t), as well as z from A(t);
6:        y := the winner of x_{r1} and x_{r2} (the one with greater fitness);
7:        for j = 1, ..., n do
8:           if z_j = y_j then
9:              if x_{gb,j} ≠ z_j then
10:                 y_j := x_{gb,j};
11:              else
12:                 if rand() < p_m then
13:                    y_j := rnd{0, 1}
14:                 end if
15:              end if
16:           end if
17:        end for
18:        if f(y) ≥ f(x_i) then
19:           x_i := y;
20:        end if
21:     end for
22:     X(t+1) := X(t);
23:     t := t + 1;
24:  end while
Algorithm 1 The binary learning differential evolution (BLDE) algorithm

For a binary maximization problem (BOP) (note that when a CoOP is considered, the real-valued variables can be coded as bit-strings, and consequently a binary optimization problem is constructed to be solved by binary-coded evolutionary algorithms)

(1)   \max f(\vec{x}), \quad \vec{x} = (x_1, x_2, \dots, x_n) \in \{0, 1\}^n,

the BLDE algorithm illustrated by Algorithm 1 possesses two collections of solutions, the population X(t) and the archive A(t). At the first generation, both the population and the archive are generated randomly. Then, the following operations are repeated until the stopping criterion is satisfied.

For each individual x_i in X(t), a trial solution y is generated from three randomly selected individuals: x_{r1} and x_{r2} from X(t) and z from A(t). First, the trial individual y is initialized as the winner of the two individuals x_{r1} and x_{r2}. Then, for every index j at which z and y coincide, the jth bit of y is changed as follows.

  • If the jth bit of the best explored solution x_{gb} differs from that of z, y_j is set to x_{gb,j}, the jth bit of x_{gb};

  • otherwise, y_j is randomly mutated with a preset probability p_m.

Finally, x_i is replaced with y if f(y) ≥ f(x_i). After the update of the population is completed, set X(t+1) = X(t) and t = t + 1.
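As a concrete illustration of the scheme just described, the Python sketch below performs one generation of trial generation and elitist replacement. It is a minimal sketch, not the authors' implementation: the function name, the random-bit form of the mutation, and the use of "≥" in the replacement test are assumptions made for exposition.

```python
import numpy as np

def blde_generation(X, A, f, p_m=0.05, rng=None):
    """One BLDE generation: trial generation and elitist replacement (a sketch).

    X : (mu, n) array, current population of 0/1 individuals (modified in place)
    A : (mu, n) array, archive (the last population)
    f : objective function to be maximized
    """
    if rng is None:
        rng = np.random.default_rng()
    mu, n = X.shape
    fitness = np.array([f(x) for x in X])
    x_gb = X[np.argmax(fitness)].copy()        # best explored solution in X
    for i in range(mu):
        r1, r2 = rng.integers(mu, size=2)      # two random members of X
        z = A[rng.integers(mu)]                # one random member of the archive
        # initialize the trial vector as the winner of x_r1 and x_r2
        y = X[r1].copy() if fitness[r1] >= fitness[r2] else X[r2].copy()
        for j in range(n):
            if z[j] == y[j]:                   # bits coincide
                if x_gb[j] != z[j]:
                    y[j] = x_gb[j]             # learn from the best explored solution
                elif rng.random() < p_m:
                    y[j] = rng.integers(2)     # random mutation with probability p_m
        fy = f(y)
        if fy >= fitness[i]:                   # elitist replacement
            X[i], fitness[i] = y, fy
    return X  # the archive update A <- old X is performed outside this function
```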

2.2 The positive functions of the learning scheme

Generally speaking, the trial solution y is generated from three randomly selected individuals, and the generation process incorporates conditional learning strategies in the mutation.

  • By randomly selecting x_{r1} and x_{r2}, BLDE can learn from any member in the present population. Because the elitism strategy is employed in the BLDE algorithm, BLDE can learn from any pbest solution in the population, whereas particles in PSO can only learn from their own pbest individuals.

  • By randomly selecting z, BLDE can learn from any member in the last population. At the early stage of the iteration process, individuals in the population X(t) are usually different from those in A(t). Combined with the first strategy, this scheme actually enhances the exploration ability of the population and, to some extent, accelerates its convergence.

  • When bits of the trial solution coincide with the corresponding bits of z, the trial solution learns from the gbest solution x_{gb} on condition that the randomly selected z differs from x_{gb} on these bits. This scheme imitates the learning strategy of PSO and, meanwhile, can also prevent the population from being governed by dominating patterns, because increasing the mutation probability p_m leads to more random mutations performed on the trial bits, preventing duplication of the dominating patterns in the population.

In PSO algorithms, each particle learns from the pbest (the best solution it has obtained so far) and the gbest (the best solution the swarm has obtained so far), and particles in the swarm exchange information only via the gbest solution. The simple and unconditional learning strategy of PSO usually results in a fast convergence rate, but sometimes leads to premature convergence to local optima. BLDE, which learns from the archive as well as from the present population, can explore the feasible region in a better way and, by conditionally learning from the best explored solution, will not be easily attracted by local optimal solutions.

3 Convergence analysis of BLDE

Denote x* to be an optimal solution of BOP (1). Then the global convergence of BLDE can be defined as follows.

Definition 1

Let {X(t)} be the population sequence of BLDE. It is said to converge in probability to the optimal solution x* of BOP (1) if it holds that \lim_{t \to \infty} P\{ x^* \in X(t) \} = 1.

To confirm the global convergence of the proposed BLDE algorithm, we first show that any feasible solution can be generated with a positive probability.

Lemma 1

In two generations, BLDE can generate any feasible solution of BOP (1) with a probability greater than or equal to a positive constant .

Proof: Denote x_i^{(t)} and z_i^{(t)} to be the ith individuals of X(t) and A(t), respectively, and let y_i^{(t)} be the trial individual generated at the tth generation. There are two different cases to be investigated.

  1. If and include at least one common individual, the probability is greater than or equal to , where and are selected randomly from and , respectively. Then, the random mutation illustrated by Lines 12 - 14 of Algorithm 1 will be activated with probability , which is the minimum probability of selecting to be , the best individual in the present population . For this case, both and are greater than or equal to . Then, any feasible solution can be generated with a positive probability greater than or equal to .

  2. If all individuals in differ from those in , two different solutions and are located at the same index with probability

    Since , is not empty. Moreover, the elitism update strategy ensures that the trial individual is initialized to be . Then,

    and , will keep unchanged with a probability greater than , the probability of selecting and not activating the mutation illustrated by Lines 12-14 of Algorithm 1. That is to say, the probability of generating a trial individual is greater than or equal to .

    For this case, the individual of the population will keep unchanged at the generation, and at the next generation (generation ), will coincide with . Then, it comes to the first case, and consequently, the trial individual can reach any feasible solution with a positive probability greater than or equal to . For this case, any feasible solution can be generated with a probability greater than .

In conclusion, in two generations the trial individual will reach any feasible solution with a probability greater than or equal to a positive constant , where .    

Theorem 1

BLDE converges in probability to the optimal solution of BOP (1).

Proof: Lemma 1 shows that there exists a positive number such that

Denoting

we know that

Thus,

If is even,

otherwise,

In conclusion, BLDE converges in probability to the optimal solution of BOP (1).    

4 Numerical experiments

Theorem 1 validates the global convergence of the BLDE algorithm, but it does not characterize its convergence behavior in practice. In this section, we show the competitiveness of BLDE with existing algorithms via numerical experiments.

4.1 Benchmark problems

Tab. 1 illustrates the selected benchmark problems, whose properties and settings are listed in Tab. 2. For the continuous problems, all real variables are coded as bit-strings. For the multiple knapsack problem (MKP), we test BLDE on five instances characterized by the data files "weing6.dat, sent02.dat, weish14.dat, weish22.dat and weish30.dat" Website . When a candidate solution is evaluated, infeasible solutions are penalized following the scheme of Uyar2005 .
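For illustration, the sketch below shows the two evaluation structures just mentioned: decoding a bit-string into real variables for the continuous problems, and penalizing infeasible MKP selections. The plain binary decoding and the fixed penalty coefficient are assumptions for exposition; the paper itself follows the adaptive penalty scheme of Uyar2005.

```python
import numpy as np

def decode_real(bits, lower, upper, bits_per_var):
    """Map a 0/1 string to real variables by plain binary decoding (an assumed coding)."""
    bits = np.asarray(bits).reshape(-1, bits_per_var)
    weights = 2 ** np.arange(bits_per_var - 1, -1, -1)
    ints = bits @ weights
    return lower + (upper - lower) * ints / (2 ** bits_per_var - 1)

def penalized_mkp_value(x, profits, weights, capacities, penalty=1e4):
    """Total profit of a knapsack selection minus a penalty for violated constraints.

    weights   : (m, n) constraint matrix, capacities : (m,) right-hand sides.
    The fixed penalty factor is illustrative only; the paper uses Uyar2005's scheme.
    """
    x = np.asarray(x, dtype=float)
    violation = np.maximum(weights @ x - capacities, 0.0).sum()
    return profits @ x - penalty * violation
```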

Problems Descriptions.
:
: Horn1994
:
:
:
:
:
Table 1: Descriptions of the selected benchmark problems.
Problem Binary/Real Dimension Bit-length Constraints Maximum Objective Value
Binary 30 30 - 30
Binary 29 29 - 49992
Real 30 180 - 0
Real 30 480 - 0
Real 30 240 - 0
Real 30 300 - 0
Real 30 300 - 0
Binary 28 28 2 130623
Binary 60 60 30 8722
Binary 60 60 5 6954
Binary 80 80 5 8947
Binary 90 90 5 11191
Table 2: Properties and settings of the benchmark problems

4.2 Parameter settings

For numerical comparisons, BLDE is compared with the angle modulated particle swarm optimization (AMPSO) Pampara2005 , the angle modulated differential evolution (AMDE) Pampara2006 , the dissimilarity artificial bee colony (DisABC) algorithm Kashan2012 , the binary particle swarm optimization (BPSO) algorithm Kennedy1997 , the binary differential evolution (binDE) algorithm Gong2007 and the adaptive quantum-inspired differential evolution (AQDE) algorithm Hota2010 . The parameters of AMPSO, AMDE, DisABC, BPSO, binDE and AQDE are set as suggested by their designers and are listed in Table 3. A pre-run of BLDE shows that when the mutation probability p_m is smaller than 0.05, its weak exploration ability leads to premature convergence to local optima of multi-modal problems, while when p_m is too large, BLDE cannot efficiently exploit the local regions of global optima. Thus, in this paper p_m is set to a moderate value to keep a balance between exploration and exploitation. All compared algorithms are tested with a population of size 50, and the results are compared after a fixed budget of function evaluations (FEs); for the MKPs, the FE budget depends on the bit-string length and the number of constraints.

Algorithm Parameter settings
AMPSO
AMDE
DisABC
BPSO .
binDE .
AQDE , , .
BLDE .
Table 3: Parameter settings for the tested algorithms

4.3 Numerical comparisons

Implemented in MATLAB, the compared algorithms are run on a PC with an Intel(R) Core CPU running at 2.8 GHz with 4 GB RAM. After 50 independent runs for each problem, the results are compared in Tab. 4 via the average best fitness (AveFit), the standard deviation of best fitness (StdDev), the success rate (SR) and the expected runtime (RunTime). Taking AveFit and StdDev as the sorting indexes, the overall ranks of the compared algorithms are listed in Tab. 5.

Problem AMPSO AMDE DisABC BPSO binDE AQDE BLDE
(for each algorithm: AveFit ± StdDev on the first line, (SR, RunTime) on the second)

3.00E+01±0.00E+00 3.00E+01±0.00E+00 3.00E+01±0.00E+00 3.00E+01±0.00E+00 2.94E+01±3.14E-01 2.34E+01±2.88E+00 3.00E+01±0.00E+00
(100, 3.01E-01) (100, 2.78E-01) (100, 1.60E+01) (100, 2.95E-01) (96, 4.07E-01) (4, 2.44E-01) (100, 2.15E-01)
5.0E+04±0.00E+00 5.0E+04±1.54E+02 4.53E+04±7.19E+03 3.96E+04±1.65E+04 4.52E+04±8.92E+03 3.46E+04±1.37E+04 5.00E+04±6.09E+01
(100, 2.34E+02) (88, 2.03E+02) (34, 2.92E+02) (66, 2.81E+02) (40, 2.79E+02) (16, 2.96E+02) (96, 3.07E+02)
-8.92E+00±2.15E+00 -5.48E+00±3.21E+00 -6.88E+00±2.86E-01 -4.88E+00±7.39E-01 -6.34E+00±3.04E-01 -6.55E+00±3.68E-01 -3.22E+00±8.74E-01
(0, 3.45E+02) (2, 3.47E+02) (0, 3.75E+02) (0, 3.53E+02) (0, 3.53E+02) (0, 3.55E+02) (0, 3.53E+02)
-4.55E+01±3.53E+01 -1.12E+01±1.99E+01 -5.70E+01±5.48E+00 -6.18E+00±2.40E+00 -4.07E+01±4.27E+00 -1.57E+01±3.79E+00 -1.12E+00±1.10E-01
(0, 1.03E+03) (48, 1.04E+03) (0, 1.21E+03) (0, 1.06E+03) (0, 1.04E+03) (0, 1.05E+03) (0, 1.05E+03)
-1.13E+01±1.15E+01 -1.27E+00±3.67E+00 -3.11E+01±5.55E+00 -1.90E-02±8.20E-03 -2.35E+01±3.91E+00 -2.32E+01±4.37E+00 -5.79E-02±2.24E-02
(22, 4.72E+02) (22, 4.76E+02) (0, 5.21E+02) (10, 4.82E+02) (0, 4.83E+02) (0, 4.85E+02) (0, 4.84E+02)
-2.94E+03±9.26E+02 -1.18E+02±3.51E+02 -4.23E+03±4.05E+02 -5.54E+02±2.82E+02 -3.58E+03±2.92E+02 -2.02E+03±4.17E+02 -4.55E+01±9.68E+01
(0, 6.37E+02) (8, 6.41E+02) (0, 7.00E+02) (0, 6.49E+02) (0, 6.48E+02) (0, 6.51E+02) (0, 6.45E+02)
-7.87E+00±3.29E+00 -4.57E+00±2.84E+00 -1.10E+01±3.19E-01 -1.67E+00±5.40E-03 -1.06E+01±2.74E-01 -1.00E+01±6.43E-01 -1.93E+00±3.84E-02
(0, 6.02E+02) (0, 6.08E+02) (0, 6.72E+02) (0, 6.22E+02) (0, 6.20E+02) (0, 6.19E+02) (0, 6.20E+02)
1.21E+05±4.61E+03 1.23E+05±2.70E+03 1.28E+05±1.14E+03 1.29E+05±2.99E+03 1.30E+05±2.04E+02 1.30E+05±2.89E+02 1.28E+05±2.66E+03
(0, 7.35E-01) (0, 6.85E-01) (2, 3.28E+00) (18, 9.82E-01) (52, 1.25E+00) (20, 9.39E-01) (10, 8.97E-01)
7.62E+03±4.80E+02 8.02E+03±1.19E+02 8.49E+03±4.21E+01 8.66E+03±3.56E+01 8.72E+03±4.45E+00 8.70E+03±1.47E+01 8.70E+03±1.62E+01
(0, 2.71E+01) (0, 2.61E+01) (0, 1.20E+02) (0, 3.65E+01) (84, 4.41E+01) (4, 3.43E+01) (4, 3.25E+01)
5.30E+03±2.12E+02 5.24E+03±1.83E+02 6.01E+03±1.19E+01 6.87E+03±7.85E+01 6.95E+03±0.00E+00 6.84E+03±7.11E+01 6.93E+03±3.66E+01
(0, 4.29E+00) (0, 4.13E+00) (0, 1.92E+01) (26, 5.88E+00) (100, 7.16E+00) (2, 5.51E+00) (58, 5.23E+00)
6.52E+03±4.14E+02 6.43E+03±2.22E+02 7.19E+03±1.89E+02 8.81E+03±1.02E+02 8.71E+03±1.06E+02 8.70E+03±9.21E+01 8.87E+03±5.43E+01
(0, 6.04E+00) (0, 5.90E+00) (0, 2.75E+01) (8, 8.31E+00) (0, 1.01E+01) (0, 7.73E+00) (4, 7.28E+00)
8.10E+03±5.96E+02 8.37E+03±2.87E+02 9.33E+03±2.29E+02 1.11E+04±4.40E+01 1.09E+04±7.01E+01 1.10E+04±8.22E+01 1.12E+04±1.86E+01
(0, 7.09E+00) (0, 6.91E+00) (0, 3.28E+01) (2, 9.64E+00) (0, 1.17E+01) (0, 8.87E+00) (6, 8.29E+00)
Table 4: Numerical results of the seven compared algorithms on the 12 test problems. The best results for each problem are highlighted by boldface type.
Problem AMPSO AMDE DisABC BPSO binDE AQDE BLDE
1 1 1 1 6 7 1
1 3 4 6 5 7 2
7 3 6 2 4 5 1
5 2 6 7 4 3 1
4 3 7 1 6 5 2
5 2 7 3 6 4 1
7 3 6 1 5 4 2
7 6 5 3 1 2 4
7 6 5 4 1 2 3
6 7 5 3 1 4 2
6 7 5 2 3 4 1
7 6 5 2 4 3 1
Average rank 5.3 4.1 5.2 2.9 3.8 4.2 1.8
Table 5: Ranks on the performances of the compared algorithms for the selected benchmark problems.

Numerical results in Tab. 4 show that BLDE is generally competitive with the compared algorithms on the selected benchmark problems, which is also illustrated by Tab. 5, where BLDE ranks first on average. Meanwhile, because it contains no time-consuming operations, BLDE spends less CPU time than most of the compared algorithms in most cases. Considering that AveFit and StdDev are only overall statistical indexes of the numerical results, we also perform a Wilcoxon rank sum test Gibbons2003 with a significance level of 0.05 to compare the performances of the tested algorithms; the results are listed in Tab. 6.
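For reference, a pairwise comparison of this kind can be reproduced with the rank-sum test in SciPy applied to the per-run best fitness values of two algorithms; the values below are placeholders for illustration only, not the paper's data.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Placeholder best-fitness values from 50 independent runs of two algorithms.
blde_runs = rng.normal(loc=30.0, scale=0.1, size=50)
other_runs = rng.normal(loc=29.4, scale=0.3, size=50)

stat, p_value = ranksums(blde_runs, other_runs)
print(f"statistic = {stat:.3f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```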

Algorithm HBPD Algorithm HBPD
AMPSO AMDE
DisABC BPSO
binDE AQDE
Table 6: Wilcoxon rank sum tests of the compared algorithms against BLDE on the benchmark problems. A "+" ("-") means the compared algorithm is significantly superior (inferior) to BLDE at the 0.05 significance level; "≈" means the compared algorithm is not significantly different from BLDE.

The results of the Wilcoxon rank sum tests demonstrate that BPSO performs significantly better on the noisy quadric problem and on the maximization problem of Ackley's function. Because BPSO imitates the evolving mechanism of PSO by simultaneously changing all bits of the individuals, it can quickly converge to the global optimal solutions. In contrast, BLDE sometimes mutates bit by bit, and consequently its evolving process is more vulnerable to noise and to the multimodal landscapes of the benchmark problems. Thus, BPSO performs better than BLDE on these two problems. For similar reasons, BPSO also outperforms BLDE on a low-dimensional MKP.

Meanwhile, binDE obtains better results than BLDE on the low-dimensional MKPs, but performs worse than BLDE on the other problems, which is attributed to the fact that the exploitation ability of binDE descends as the search space expands. Consequently, binDE cannot perform well on the high-dimensional problems. Similarly, AQDE, which is specially designed for knapsack problems, only outperforms BLDE on one low-dimensional MKP, and cannot perform better than BLDE on the other selected benchmark problems.

4.4 Further comparison on the exploration and exploitation abilities

To further explore the underlying causes of BLDE performing worse than BPSO, binDE and AQDE on several of the test problems, we investigate how their exploration and exploitation abilities change during the evolving process. To this end, a renewal metric and a refinement metric are defined to quantify the exploration and exploitation abilities, respectively.

Definition 2

Denote the population of an EA at the tth generation to be X(t) = {x_1^{(t)}, ..., x_µ^{(t)}}, which consists of n-bit individuals. Let

HD(\vec{x}, \vec{y}) = \sum_{j=1}^{n} |x_j - y_j|

be the Hamming distance between two binary vectors \vec{x} and \vec{y}. The renewal metric of an EA at the tth generation is defined as

(2)   R(t) = \frac{1}{\mu n} \sum_{i=1}^{\mu} HD(\vec{x}_i^{(t)}, \vec{y}_i^{(t)}),

where \vec{x}_i^{(t)} is the ith individual in X(t), and \vec{y}_i^{(t)} is the corresponding candidate solution. The refinement metric of an EA at the tth generation is defined as

(3)   F(t) = \frac{1}{\mu n} \sum_{i=1}^{\mu} \left( n - HD(\vec{x}_i^{(t)}, \vec{x}_{gb}^{(t)}) \right),

where \vec{x}_{gb}^{(t)} is the best explored solution before the tth generation.

The Hamming distance between \vec{x}_i^{(t)} and the corresponding trial vector \vec{y}_i^{(t)} measures the overall change performed on the bit-string by the variation strategies. Accordingly, its average value over the whole population indicates the overall change of the population, so the renewal metric properly reveals the exploration ability of an EA at generation t. Meanwhile, an EA with a large refinement metric intensely exploits the local region around the best explored solution \vec{x}_{gb}^{(t)}, and thus harbors a powerful exploitation ability.
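A possible computation of the two metrics is sketched below; since the exact normalization of the paper's definitions is not fully recoverable here, the per-bit averaging used in these functions is an assumption.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two 0/1 vectors."""
    return int(np.sum(np.asarray(a) != np.asarray(b)))

def renewal_metric(population, candidates):
    """Average per-bit Hamming distance between each individual and its trial vector."""
    n = len(population[0])
    return np.mean([hamming(x, y) for x, y in zip(population, candidates)]) / n

def refinement_metric(population, x_gb):
    """Average per-bit closeness of the population to the best explored solution x_gb;
    values near 1 indicate intense exploitation around x_gb (assumed form of Eq. (3))."""
    n = len(population[0])
    return np.mean([n - hamming(x, x_gb) for x in population]) / n
```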

For the comparison, the changing curves of the renewal metric and the refinement metric for BLDE, BPSO, AQDE and binDE are illustrated in Figure 1. Figs. 1(a) and 1(b) show that when BPSO is employed to solve the two problems on which it wins, the renewal metric quickly descends to about zero and the refinement metric ascends to a high level, which demonstrates that the population of BPSO converges quickly. Meanwhile, the diversity of the population rapidly descends to a low level, and the population focuses on local search around the obtained best solution. Since the intensity of the noise in the noisy quadric problem is small, the convergence of BPSO is not significantly influenced, and because the numerous local optimal solutions of Ackley's function are regularly distributed in the feasible region, BPSO can also quickly locate the global optimal solution. However, BLDE tries to keep a balance between exploration and exploitation, and its bit-by-bit variation strategies make it more vulnerable to the noise of the former problem and to the multi-modal landscape of the latter. As a consequence, BPSO performs better than BLDE on these two problems.

However, the local optimal solutions of the MKPs are not regularly distributed. Thus, to efficiently explore the feasible regions, it is vital to keep a balance between exploration and exploitation. Figs. 1(c), 1(d), 1(e) and 1(f) demonstrate that binDE and AQDE keep a better balance between exploration and exploitation than BLDE on these instances. Thus, AQDE performs better than BLDE on one low-dimensional MKP, and binDE performs better than BLDE on the three MKPs shown in Figs. 1(d), 1(e) and 1(f).

(a) : BLDE vs. BPSO;
(b) : BLDE vs. BPSO;
(c) : BLDE vs. AQDE;
(d) : BLDE vs. binDE;
(e) : BLDE vs. binDE;
(f) : BLDE vs binDE.
Figure 1: Comparisons of the renewal and refinement metrics for test problems , , , , .

5 Performance of BLDE on the unit commitment problem

In this section, we employ BLDE to solve the unit commitment problem (UCP) in power systems. To minimize the production cost over a daily to weekly time horizon, UCP involves the optimal scheduling of power generating units as well as the determination of the optimal amounts of power to be generated by the committed units (to compare with the work reported in Datta2012 , we employ similar notations and descriptions in this section) Datta2012 . Thus, UCP is a mixed-integer optimization problem, whose decision variables are a binary string representing the on/off statuses of the units and real variables indicating the power generated by the units.

5.1 Objective function of UCP

The objective of UCP is to minimize the total production cost

(4)   TC = \sum_{t=1}^{T} \sum_{i=1}^{N} \left[ u_{i,t}\, F_i(P_{i,t}) + u_{i,t}\,(1 - u_{i,t-1})\, SC_{i,t} \right],

where N is the number of units to be scheduled and T is the time horizon. When the ith unit is committed to generate power at time t, the binary variable u_{i,t} is set to 1; otherwise, u_{i,t} = 0. The function F_i(P_{i,t}) represents the fuel cost of unit i at time t, which is frequently approximated by

(5)   F_i(P_{i,t}) = a_i + b_i P_{i,t} + c_i P_{i,t}^2,

where a_i, b_i and c_i are known coefficients of unit i. If the unit has been off prior to start-up, there is a start-up cost

(6)   SC_{i,t} = \begin{cases} HSC_i, & MDT_i \le T^{off}_{i,t} \le MDT_i + CSH_i, \\ CSC_i, & T^{off}_{i,t} > MDT_i + CSH_i, \end{cases}

where HSC_i, CSC_i, CSH_i and MDT_i are the hot start cost, cold start cost, cold start time and minimum down time of unit i, respectively. The continuously-off time T^{off}_{i,t} of unit i is determined by

(7)

where IS_i is the initial status of unit i, which shows for how long the unit was on/off prior to the start of the time horizon.
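Putting the pieces of the objective together, the sketch below evaluates the total production cost of a schedule under the quadratic fuel cost (5) and the two-level start-up cost (6); the dictionary keys and the simple off-time counter that stands in for Eq. (7) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def fuel_cost(a, b, c, p):
    """Quadratic fuel cost a + b*P + c*P^2 of one unit (Eq. (5))."""
    return a + b * p + c * p ** 2

def startup_cost(t_off, unit):
    """Hot start if the unit has been off only briefly, cold start otherwise (Eq. (6))."""
    return unit['hot'] if t_off <= unit['mdt'] + unit['csh'] else unit['cold']

def total_production_cost(u, P, units):
    """Total cost of a schedule, with u, P of shape (N, T) for on/off states and outputs.

    `units` is a list of dicts with keys 'a', 'b', 'c', 'hot', 'cold', 'csh', 'mdt', 'is0';
    the continuously-off time is tracked with a simple counter initialized from the
    initial status 'is0', a simplification of Eq. (7).
    """
    N, T = u.shape
    cost = 0.0
    for i, unit in enumerate(units):
        t_off = max(0, -unit['is0'])            # hours the unit has already been off
        for t in range(T):
            if u[i, t]:
                if t_off > 0:                   # the unit is started up at hour t
                    cost += startup_cost(t_off, unit)
                cost += fuel_cost(unit['a'], unit['b'], unit['c'], P[i, t])
                t_off = 0
            else:
                t_off += 1
    return cost
```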

5.2 Constraints in UCP

The minimization of the total production cost is subject to the following constraints.

Power balance constraints:

The total power generated at time t must meet the power demand at that time instant, i.e.,

(8)   \sum_{i=1}^{N} u_{i,t} P_{i,t} = PD_t, \quad t = 1, \dots, T,

where PD_t is the power demand at time t. In practice it is hardly possible to meet the power demand exactly, so a small error \varepsilon is allowed for the generated power, i.e.,

(9)   \left| \sum_{i=1}^{N} u_{i,t} P_{i,t} - PD_t \right| \le \varepsilon\, PD_t, \quad t = 1, \dots, T.
Spinning reserve constraints:

Due to possible outages of equipment, it is necessary for power systems to satisfy the spinning reserve constraints. Thus, the sum of the maximum power generating capacities of all committed units should be greater than or equal to the power demand plus the minimum spinning reserve requirement, i.e.,

(10)   \sum_{i=1}^{N} u_{i,t} P^{max}_{i} \ge PD_t + SR_t, \quad t = 1, \dots, T,

where P^{max}_{i} is the maximum power generating capacity of unit i, and SR_t is the minimum spinning reserve requirement at time t.

Minimum up time constraints:

If unit i is on at time t and switched off at time t+1, its continuous up time T^{on}_{i,t} should be greater than or equal to the minimum up time MUT_i of unit i, i.e.,

(11)   T^{on}_{i,t} \ge MUT_i,

where the continuously-on time T^{on}_{i,t} is

(12)
Minimum down time constraints:

If unit i is off at time t and switched on at time t+1, its continuous down time T^{off}_{i,t} should be greater than or equal to the minimum down time MDT_i of unit i, i.e.,

(13)   T^{off}_{i,t} \ge MDT_i.
Range of generated power:

The generated power of a committed unit is limited to an interval, i.e.,

(14)   P^{min}_{i} \le P_{i,t} \le P^{max}_{i}, \quad \text{if } u_{i,t} = 1,

where P^{min}_{i} and P^{max}_{i} are the minimum and maximum power outputs of unit i, respectively.
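For illustration, the main hourly constraints (8), (10) and (14), together with the relaxed balance (9), can be checked on a candidate schedule as in the sketch below; the minimum up/down-time constraints (11)-(13) require run-length tracking and are omitted, and the relative-error form of the balance check is an assumption.

```python
import numpy as np

def check_schedule(u, P, p_min, p_max, demand, reserve, eps=0.0):
    """Check power balance, spinning reserve and output ranges for every hour.

    u, P          : (N, T) arrays of on/off states and power outputs
    p_min, p_max  : (N,) unit output limits
    demand        : (T,) forecasted power demand PD_t
    reserve       : (T,) minimum spinning reserve SR_t
    eps           : allowed relative power-balance error (eps = 0 enforces Eq. (8))
    """
    generated = (u * P).sum(axis=0)
    balance_ok = np.all(np.abs(generated - demand) <= eps * demand)
    reserve_ok = np.all((u * p_max[:, None]).sum(axis=0) >= demand + reserve)
    range_ok = np.all((P >= u * p_min[:, None]) & (P <= u * p_max[:, None]))
    return bool(balance_ok and reserve_ok and range_ok)
```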

5.3 Implementation of BLDE for UCP

The optimal commitment of power units in UCP is obtained by combining BLDE with real-coded DE operations. In BLDE, each binary individual represents an on/off scheduling plan of the units and is accompanied by a real-coded individual representing the specific power outputs of the units. When the binary individuals are recombined during the iteration process, the real-coded individuals are recombined via the DE/rand/1 mutation and binary crossover strategies of real-coded DE. Then, the binary individuals and the corresponding real individuals are integrated for evaluation. If the combined mixed-integer individuals violate the constraints of UCP, they are repaired via the repairing mechanisms proposed in Datta2012 .
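A minimal sketch of the real-variable recombination paired with BLDE is given below, interpreting the "DE/rand/1 mutation and binary crossover" as the standard DE/rand/1 plus binomial crossover; only the scale factor F = 0.8 is reported in the paper, so the crossover rate used here is an assumption.

```python
import numpy as np

def de_rand1_trial(pop, i, F=0.8, CR=0.9, rng=None):
    """DE/rand/1 mutation plus binomial crossover for the real-coded part (a sketch).

    pop : (mu, d) real-valued population (power outputs of the committed units)
    i   : index of the target vector
    CR  : crossover rate, an assumed value (only F = 0.8 is given in the paper)
    """
    if rng is None:
        rng = np.random.default_rng()
    mu, d = pop.shape
    candidates = [k for k in range(mu) if k != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])      # DE/rand/1 mutation
    cross = rng.random(d) < CR                      # binomial crossover mask
    cross[rng.integers(d)] = True                   # keep at least one mutant component
    return np.where(cross, mutant, pop[i])
```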

The performance of BLDE is tested on a 10-unit power system, whose unit parameters and forecasted power demands are listed in Tab. 7 and Tab. 8, respectively. To fairly compare BLDE with the method proposed in Datta2012 , we also set the population size to 100, and the results are compared after 30 independent runs of 2500 iterations, where the scale factor of the DE mutation is set to 0.8. The statistical results are listed in Tab. 9.

Unit P_max (MW) P_min (MW) a ($/h) b ($/MWh) c ($/MW^2 h) HSC ($) CSC ($) CSH (h) MUT (h) MDT (h) IS (h)
1 455 150 1000 16.19 0.00048 4500 9000 5 8 8 8
2 455 150 970 17.26 0.00031 5000 10000 5 8 8 8
3 130 20 700 16.60 0.00200 550 1100 4 5 5 -5
4 130 20 680 16.50 0.00211 560 1120 4 5 5 -5
5 162 25 450 19.70 0.00398 900 1800 4 6 6 -6
6 80 20 370 22.26 0.00712 170 340 2 3 3 -3
7 85 25 480 27.74 0.00079 260 520 2 3 3 -3
8 55 10 660 25.92 0.00413 30 60 0 1 1 -1
9 55 10 665 27.27 0.00222 30 60 0 1 1 -1
10 55 10 670 27.79 0.00173 30 60 0 1 1 -1
Table 7: Unit parameters for the 10-unit power system.
Hour 1 2 3 4 5 6 7 8 9 10 11 12
Demand (MW) 700 700 850 950 1000 1100 1150 1200 1300 1400 1450 1500
Hour 13 14 15 16 17 18 19 20 21 22 23 24
Demand (MW) 1400 1300 1200 1050 1000 1100 1200 1400 1300 1100 900 800
Table 8: Forecasted power demands for the 10-unit system over the 24-h time horizon.
Method Power balance error Best cost Average Cost Worst Cost Standard deviation
BRCDE 0.0% 563938 - - -
0.1% 563446 563514 563563 30
0.5% 561876 - - -
1% 559357 - - -
BLDE 0.0% 563977 564005 564088 24
0.1% 563552 563636 563745 49
0.5% 561677 561847 - 50
1% 559155 559207 559426 48
Table 9: Comparison of results between BLDE and BRCDE Datta2012 for the 10-unit power system. "-" means that the corresponding item was not reported in the literature.

The comparison results show that when the allowed power balance error is small, the performance of BLDE is slightly worse than that of the binary-real-coded differential evolution (BRCDE) algorithm proposed in Datta2012 . However, when the power balance constraint is relaxed to a relatively great extent, BLDE outperforms BRCDE on the UCP of the 10-unit power system. The reason could be that the crossover operation for the real variables is not appropriately regulated for UCP, and accordingly, simultaneous variations of all real variables usually lead to constraint violations. Thus, BLDE only outperforms BRCDE when the constraints are greatly relaxed.

6 Discussions

In this paper, we propose a BLDE algorithm that appropriately incorporates the mutation strategy of binary DE and the learning mechanism of binary PSO. For the majority of the selected benchmark problems, BLDE outperforms the compared algorithms, which indicates that BLDE is competitive with them. However, the statistical test results show that BPSO performs better than BLDE on two of the problems, AQDE is more efficient on one low-dimensional MKP, and binDE obtains better results on several low-dimensional MKPs. When generating a candidate solution, BLDE first initializes it as the winner of two obtained solutions and then regulates it by learning from the best individual in the population. This strategy simultaneously incorporates a synchronously changing strategy and a bitwise mutation strategy for candidate generation. Thus, BLDE performs well on most of the high-dimensional benchmark problems. However, when BLDE is employed to solve problems whose global optimal solutions are easy to locate, it performs worse than BPSO; meanwhile, when it is applied to the low-dimensional MKPs, whose local optimal solutions are irregularly distributed in the feasible regions, it cannot perform better than binDE.

7 Conclusions

Generally, the proposed BLDE is competitive with existing binary evolutionary algorithms; however, its performance can still be improved. Thus, future work will focus on designing an adaptive strategy that appropriately balances the synchronously changing strategy and the bitwise mutation strategy employed in BLDE. Meanwhile, we will try to further improve its performance on mixed-integer optimization problems by efficiently combining it with real-coded recombination strategies.

Acknowledgements

This work was partially supported by the Natural Science Foundation of China under Grants 51039005, 61173060 and 61303028, as well as the Fundamental Research Funds for the Central Universities (WUT: 2013-Ia-001).

References

  • (1) Banks A., Vincent J. and Anyakoha C., A review of particle swarm optimization, Part I: background and development. Natural Computing, 6(4): 467-484, 2007.
  • (2) Banks A., Vincent J. and Anyakoha C., A review of particle swarm optimization, Part II: hybridisation, combinatorial, multicriterial and constrained optimization, and indicative applications. Natural Computing, 7(1): 109-124, 2008.
  • (3) Das S., Konar A. and Chakraborty U. K., Improving particle swarm optimization with differentially perturbed velocity. In Proc. of 2005 Conference on Genetic and Evolutionary Computation (GECCO'05), ACM, 2005, pp. 177-184.
  • (4) Das S. and Suganthan P. N., Differential evolution: a survey of the state-of-the-art. IEEE Transactions on Evolutionary Computation, 15(1):4-31, 2011.
  • (5) Datta D. and Dutta S., A binary-real-coded differential evolution for unit commitment problem. Electrical Power and Energy Systems, 42(1): 517-524, 2012.
  • (6) Eberhart R. C. and Shi Y., Particle swarm optimization: developments, applications and resources. In Proc. of 2001 IEEE Conference on Evolutionary Computation (CEC 2001), IEEE, 2001, pp. 81-86.
  • (7) Engelbrecht A. P. and Pampará G., Binary differential evolution strategies. In Proc. of 2007 IEEE Congress on Evolutionary Computation, IEEE, 2007, pp. 1942-1947.
  • (8) Gibbons, J. and Chakraborti, S., Nonparametric Statistical Inference (the Fifth Edition). Taylor and Francis, 2011.
  • (9) Gong, T. and Tuson, A.L., Differential evolution for binary encoding. In Soft Computing in Industrial Applications, Springer, 2007, pp. 251-262.
  • (10) He X. and Han L., A novel binary differential evolution algorithm based on artificial immune system. In Proc. of 2007 IEEE Congress on Evolutionary Computation, IEEE, 2007, pp. 2267-2272.
  • (11) Horn J., Goldberg D. E. and Deb K., Long path problems. In Proc. of 1994 International Conference on Parallel Problem Solving from Nature (PPSN III), Springer, 1994, pp. 149-158.
  • (12) Hota A. R. and Pat, A., An adaptive quantum-inspired differential evolution algorithm for 0-1 knapsack problem. In Proc. of 2010 Second World Congress on Nature and Biologically Inspired Computing (NaBIC), IEEE, 2010, pp.703-708.
  • (13) Kashan M. H., Nahavandi N. and Kashan A. H., DisABC: A new artificial bee colony algorithm for binary optimization. Applied Soft Computing, 12(1): 342-352, 2012.
  • (14) Kashan M. H., Kashan A. H. and Nahavandi N., A novel differential evolution algorithm for binary optimization. Computational Optimization and Applications, 55(2): 481-513, 2013.
  • (15) Kennedy J. and Eberhart R. C., Particle swarm optimization. In Proc. of 1995 IEEE International Conference on Neural Networks, IEEE, 1995, pp. 1942-1948.
  • (16) Kennedy J. and Eberhart R. C., A discrete binary version of the particle swarm algorithm. In Proc. of IEEE International Conference on Systems, Man, and Cybernetics, IEEE, 1997, pp. 4104-4108.
  • (17) Kennedy J., Particle swarm optimization. In Encyclopedia of Machine Learning, Springer, 2010, pp. 760-766.
  • (18) Moore P. W. and Venayagamoorthy G. K., Evolving digital circuit using hybrid particle swarm optimization and differential evolution. International Journal of Neural Systems, 16(3): 163-177, 2006.
  • (19) Omran M. G. H., Engelbrecht A. P. and Salman A., Bare bones differential evolution. European Journal of Operational Research, 196(1): 128-139, 2009.
  • (20) Pampara, G., Franken, N.,& Engelbrecht, A.P., Combining particle swarm optimisation with angle modulation to solve binary problems. In Proc. of 2005 IEEE Congress on Evolutionary Computation, IEEE, 2005, pp. 89-96.
  • (21) Pampara G., Engelbrecht A. P. and Franken, N., Binary differential evolution. In Proc. of 2006 IEEE Congress on Evolutionary Computation, IEEE, 2006, pp. 1873-1879.
  • (22) Ponsich A. and Coello Coello C. A., Differential evolution performances for the solution of mixed-integer constrained process engineering problems. Applied Soft Computing, 11: 399-409, 2011.
  • (23) Poli, R., Kennedy, J., and Blackwell, T., Particle swarm optimization: An Overview. Swarm Intelligence, 1: 33-57, 2007.
  • (24) Price, K., Storn, R., and Lampinen, J., Differential Evolution: A Practical Approach to Global Optimization. Springer, 2005.
  • (25) Rekanos I. T., Shape reconstruction of a perfectly conducting scatterer using differential evolution and particle swarm optimization. IEEE Transactions on Geoscience and Remote Sensing, 46(7): 1967-1974, 2008.
  • (26) Storn R. and Price K., Differential evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces. Journal of Global Optimization, 11: 341-359, 1997.
  • (27) Uyar, Ş., & Eryiğit, G., Improvements to penalty-based evolutionary algorithms for the multi-dimensional knapsack problem using a gene-based adaptive mutation approach. In Proc. of 2005 Conference on Genetic and evolutionary computation (GECCO ’05), ACM, 2005, pp. 1257-1264.
  • (28) Vesterstrom J. and Thomsen R., A comparative study of differential evolution, particle swarm optimization and evolutionary algorithms on numerical benchmark problems. In Proc. of 2004 IEEE Conference on Evolutionary Computation (CEC'04), IEEE, 2004, pp. 1980-1987.
  • (29) Wang L., Fu X. P., Mao Y. F., Menhas M. I. and Fei M. R., A novel modified binary differential evolution algorithm and its applications. Neurocomputing, 98: 55-75, 2012.
  • (30) Wu C-Y. and Tseng K-Y., Topology optimization of structures using binary differential evolution. Structural and Multidisciplinary Optimization, 42: 939-953, 2010.
  • (31) http://www.zib.de/index.php?id=921&no_cache=1&L=0&cHash=fbd4ff9555f8714ac6238261e3963432&type=98
  • (32) Yang Q., A comparative study of discrete differential evolution on binary constraint satisfaction problems. In Proc. of 2008 IEEE Congress on Evolutionary Computation. IEEE, 2008, pp. 330-335.