1 Introduction
For the optimization of computationally hard problems and of problems that are mathematically intractable, machine-learning-based strategies such as evolutionary computation (EC) [14] and artificial neural networks (ANN) [8] have seen significant success in numerous application areas. The "no-free-lunch theorem" [25] tells us that, theoretically, averaged over all possible optimization functions, all algorithms perform equally well. In practice, however, for specific problems (particularly, hard problems), the need for better and still better algorithms (and heuristics) remains.
The Jaya algorithm [22], one of the newest members of the evolutionary computation family, has seen remarkable success across a wide variety of applications in continuous optimization (see Section 2 below). Jaya's success can arguably be attributed to the following two features: (a) it requires very few algorithm parameters, and (b) compared to most of its EC cousins, Jaya is extremely simple to implement. A user of the Jaya algorithm has to decide on suitable values for only two parameters – the population size and the number of iterations (generations). Because any population-based algorithm (or heuristic) must have a population size, and because the user of any algorithm/heuristic must have an idea of when to stop the process, it can be argued that the population size and the stopping condition are two fundamental attributes of any population-based heuristic and that the Jaya algorithm is therefore parameterless. In this paper, we present an algorithm that improves over the Jaya algorithm by modifying the search strategy, without compromising on the above two qualities. The comparative performance of Jaya and the proposed method is studied empirically on a twelve-function benchmark test-suite as well as on a real-world problem from fuel cell stack design optimization. The improvement in performance afforded by the proposed algorithm is validated with statistical tests of significance. (Technically, Jaya is not an algorithm but a heuristic. However, following common practice in the evolutionary computation community, we continue to refer to it as an algorithm in this paper.)
The remainder of this paper is organized as follows. A very brief outline of some of the most interesting previous work on the Jaya algorithm is presented in Section 2. Section 3 presents the proposed algorithm. Simulation results and statistical tests for performance analysis are presented in Section 4. Finally, conclusions are drawn in Section 5.
2 A brief overview of previous work on Jaya
A variation of the standard Jaya algorithm is presented in the multi-team perturbation-guiding Jaya (MTPG-Jaya) [19], where several "teams" explore the search space, with the same population being used by each team, while the "perturbations" governing the progression of the teams are different. MTPG-Jaya was applied to the layout optimization problem of a wind farm. The Jaya algorithm was originally designed for continuous (real-valued) optimization, and most of Jaya's applications to date have been in the continuous domain. A binary version of Jaya, however, was proposed in [12], where the authors borrowed (from [18]) the idea of combining particle swarm optimization with angle modulation and adapted that idea for Jaya. The binary Jaya was applied to feature selection in [12]. Modifications to the standard Jaya algorithm include a self-adaptive multi-population-based Jaya algorithm that was applied to entropy generation minimization of a plate-fin heat exchanger [21], a multi-objective Jaya algorithm that was applied to water-jet machining process optimization [20], and a hybrid parallel Jaya algorithm for a multi-core environment [13]. Application areas of the Jaya algorithm have included such diverse fields as pathological brain detection systems [16], flow-shop scheduling [2], maximum power point tracking problems in photovoltaic systems [9], identification and monitoring of electroencephalogram-based brain-computer interfaces for motor imagery tasks [23], and traffic signal control [7].

3 The proposed algorithm
The new algorithm is presented in Algorithm 1 where, without loss of generality, an array representation with conventional indexed access is assumed for the members (individuals) of a population. At each generation, we examine the individuals in the population one by one, in sequence, conditionally replacing each with a newly created individual. A new individual is created from the current individual by using the best individual, the worst individual, and two random numbers – each chosen uniformly at random in (0, 1] – per problem parameter (variable). The generation of the new individual x' from the current individual x is described by the following equation (x, x', b, and w are each a d-component vector):

\[
x'_j = x_j + r_{1,j}\,\big(b_j - |x_j|\big) - r_{2,j}\,\big(w_j - |x_j|\big), \qquad j = 1, 2, \ldots, d,
\]

where the x_j, j = 1 to d, represent the parameters (variables) to be optimized, r_{1,j} and r_{2,j} are each a random number in (0.0, 1.0] drawn afresh at every iteration (generation), and b and w represent, respectively, the best and the worst individual in the population at the time of the creation of x' from x. When x'_j falls outside its problem-specified lower or upper bound, it is clamped at the appropriate bound.
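In code, the move can be sketched as follows (our own illustrative Python, not the authors' implementation; the name jaya_move and the argument layout are ours):

```python
import random

def jaya_move(x, best, worst, lo, hi):
    """Create a candidate from individual x using the Jaya update rule.

    x, best, worst: lists of d floats; lo, hi: per-variable bounds.
    """
    new = []
    for j in range(len(x)):
        # two fresh random numbers per variable; random.uniform(0, 1)
        # stands in for a draw from (0, 1]
        r1, r2 = random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)
        v = x[j] + r1 * (best[j] - abs(x[j])) - r2 * (worst[j] - abs(x[j]))
        # clamp at the problem-specified bounds
        new.append(min(max(v, lo[j]), hi[j]))
    return new
```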
In the original Jaya algorithm, the new individual replaces the current individual only if it (the former) is better than the latter. The present algorithm, however, accepts the new individual if it is at least as good as the current individual.
The original Jaya updates the population-best and the population-worst individuals once every generation. Algorithm 1, however, checks to see if the population-best needs to be updated, and performs the update if needed, after every single replacement of an existing individual. A similar approach is adopted for updating the population-worst, but in this case, an update is needed only when the existing (current) individual is itself the worst one; this is because a replacement is guaranteed never to cause the objective (cost) function value to become worse.
The simultaneous presence in the population of more than one best (or worst) individual (clones of the same individual and/or different genotypes with the same phenotype) presents no problem for the new algorithm, because the computation of the best (or worst) is always over the entire population, that is, it is never done incrementally.
We improve upon Jaya by changing the policies for updating the best and the worst members and also by changing the criterion used to accept a new member as a replacement for an existing member. The motivation for the first pair of changes comes from the argument that an early availability and use of the best and worst individuals should lead to an earlier creation of better individuals; this is similar to the idea behind the "steady-state" operation of genetic algorithms [24, 6]. The logic behind the second change is to try to avoid the "plateau problem": when many individuals share the same fitness, accepting equally good newcomers lets the search drift across the plateau instead of stalling. We call the proposed algorithm semi-steady-state Jaya, or S-Jaya.
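Putting the acceptance rule and the incremental best/worst updates together, one run of S-Jaya can be sketched as below (an illustrative Python rendering of our reading of Algorithm 1, assuming a minimization objective f over box bounds; all identifiers are ours, not the authors'):

```python
import random

def sjaya(f, lo, hi, pop_size, gens):
    """Minimize f over the box [lo, hi]^d with a semi-steady-state Jaya loop."""
    d = len(lo)
    pop = [[random.uniform(lo[j], hi[j]) for j in range(d)] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    ib = min(range(pop_size), key=lambda i: cost[i])  # index of current best
    iw = max(range(pop_size), key=lambda i: cost[i])  # index of current worst
    for _ in range(gens):
        for i in range(pop_size):
            x, b, w = pop[i], pop[ib], pop[iw]
            new = []
            for j in range(d):
                r1, r2 = random.random(), random.random()
                v = x[j] + r1 * (b[j] - abs(x[j])) - r2 * (w[j] - abs(x[j]))
                new.append(min(max(v, lo[j]), hi[j]))  # clamp at the bounds
            c = f(new)
            if c <= cost[i]:  # accept ties: "at least as good"
                pop[i], cost[i] = new, c
                if c < cost[ib]:  # best may change after every replacement
                    ib = i
                if i == iw:  # worst can change only if the worst itself was replaced;
                    # recompute over the whole population, never incrementally
                    iw = max(range(pop_size), key=lambda k: cost[k])
    return pop[ib], cost[ib]
```

Because a replacement is accepted only when it is at least as good, costs never increase, which is why the population-worst needs rechecking only when the worst individual itself is replaced.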
4 Simulation results
For studying the comparative performance of Jaya and S-Jaya, we use a benchmark test-suite comprising a dozen well-known test functions from the literature, together with a real-world problem of fuel cell stack design optimization. All thirteen problems involve minimization of the objective function value (fitness). The following metrics [3] are used for performance comparison:

- Best-of-run fitness: the best (lowest), mean, and standard deviation (over 30 runs) of the best-of-run fitness values;

- FirstHitEvals, the number of fitness evaluations needed to reach a specified fitness value for the first time in a run: the best (fewest), mean, and standard deviation (over 30 runs) of these numbers;

- Success count: the number of runs (out of the thirty) in which the specified fitness level is reached (it is possible that the specified level is never reached with the given population size and the given number of generations).

The best-of-run fitness provides a measure of the quality of the solution, while the FirstHitEvals metric expresses how fast the algorithm is able to find a solution of a given quality. The two metrics are thus complementary to each other.
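The bookkeeping behind these metrics for a single run can be sketched as follows (illustrative Python; the names update_metrics, target, and evals are ours):

```python
def update_metrics(state, fitness, evals, target):
    """Track best-of-run fitness and FirstHitEvals for one run.

    state: dict with keys 'best' and 'first_hit' ('first_hit' stays
    None until the target fitness level is reached for the first time).
    """
    if state['best'] is None or fitness < state['best']:
        state['best'] = fitness
    if state['first_hit'] is None and fitness <= target:
        state['first_hit'] = evals  # evaluations used when the target is first hit
    return state

# One run: feed every evaluated fitness through the tracker.
state = {'best': None, 'first_hit': None}
for evals, fit in enumerate([5.0, 2.0, 0.5, 0.9, 1e-7], start=1):
    update_metrics(state, fit, evals, target=1e-6)
# the run counts as a "success" exactly when state['first_hit'] is not None
```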
4.1 Results on the benchmark testsuite
The benchmark suite (Table 1) includes functions with a wide variety of features and levels of problem difficulty, including unimodal/multimodal, separable/non-separable, continuous/discontinuous, differentiable/non-differentiable, and convex/non-convex functions.
Name  Definition  Dim.  Global Minimum  Bounds

Ackley  —  30  0  —
Rosenbrock  —  30  0  —
Chung-Reynolds  —  30  0  —
Step  —  30  0  —
Alpine1  —  30  0  —
SumSquares  —  30  0  —
Sphere  —  30  0  —
Bohachevsky3  —  2  0  —
Bohachevsky2  —  2  0  —
Bartels Conn  —  2  1  —
Goldstein-Price  —  2  3  —
Matyas  —  2  0  —
For each test function, the population size and the number of generations were chosen based loosely on the problem size (number of variables) and the problem difficulty. No systematic tuning of the population size (PopSize) or the number of generations (Gens) was attempted; the values used in this study were found to be reasonably good across a majority of the problems after a few initial trials. Two PopSize-Gens combinations were used for each function (see Table 2). For the 30-variable functions, population sizes of 100 and 150 were used, with the corresponding numbers of generations being 3000 and 5000. For the 2-variable functions, the population sizes were 15 and 20, with 5000 generations used for both. Thirty independent runs of each of the two algorithms were executed for each PopSize-Gens combination on each of the test functions. A run is considered a success if it manages to produce at least one solution with a fitness within a distance of 1.0e-6 from the true (known) global optimum, and the number of fitness evaluations corresponding to the first appearance of such a solution is recorded as the FirstHitEvals of that run.
Tables 2 and 3 show the results of S-Jaya and Jaya, respectively, on the 12-function test-suite. In all the tables in this paper, results are rounded at the fourth decimal place.
Function  PopSize  Gens  Best-of-run Fitness  FirstHitEvals
Best  Mean  Std Dev  Success  Best  Mean  Std Dev  
Ackley  100  3000  7.4347e-10  1.8090e-09  9.1920e-10  30  209499  217209.4333  4885.3830
150  5000  1.0938e-12  2.7097e-12  7.9283e-13  30  407146  426516.7667  7522.8400
Rosenb  100  3000  0.0015  25.4532  28.8764  0  —  —  — 
150  5000  0.0001  17.0565  26.9145  0  —  —  —  
ChuRey  100  3000  5.0261e-37  1.1798e-35  3.0313e-35  30  77035  84420.6  3325.8495
150  5000  1.2691e-48  4.9288e-47  6.4529e-47  30  153594  162497.0667  3651.8492
Step  100  3000  0.0  0.0667  0.2494  28  39004  43895.0357  5319.6538 
150  5000  0.0  0.0  0.0  30  68099  73154.9333  3639.5311  
Alp1  100  3000  0.0247  6.8245  6.4345  0  —  —  — 
150  5000  0.0137  4.5976  5.7499  0  —  —  —  
F2Rao  100  3000  6.1724e-18  3.8440e-17  4.0234e-17  30  138646  144029.4333  3771.9204
150  5000  1.3309e-23  7.2599e-23  5.5164e-23  30  266653  280539.4333  6344.5950
Sphere  100  3000  5.6616e-17  2.9297e-16  2.6115e-16  30  152133  157149.2333  2954.1983
150  5000  1.3981e-22  6.1597e-22  4.1632e-22  30  298554  306880.0667  4927.0814
Boha3  15  5000  0.0  0.0  0.0  30  882  1322.4667  308.4498 
20  5000  0.0  0.0  0.0  30  1182  1838.7  333.6645  
Boha2  15  5000  0.0  0.0  0.0  30  718  1005.3333  268.2153 
20  5000  0.0  0.0  0.0  30  890  1443.3667  222.3957  
Bartel  15  5000  1.0  1.0  0.0  30  893  1061.0  90.1706 
20  5000  1.0  1.0  0.0  30  1128  1523.4333  124.4451  
GoldP  15  5000  3.0000  3.0000  1.0820e-05  6  28320  55587.5  14917.3860
20  5000  3.0000  3.0000  1.8986e-05  5  58442  82977.0  14243.2530
Matyas  15  5000  0.0  3.0482e-35  1.6415e-34  30  471  856.1  169.1497
20  5000  0.0  5.6005e-123  3.0160e-122  30  692  1152.7333  264.9280
Function  PopSize  Gens  Best-of-run Fitness  FirstHitEvals
Best  Mean  Std Dev  Success  Best  Mean  Std Dev  
Ackley  100  3000  4.2232e-06  7.6506e-06  1.9595e-06  0  —  —  —
150  5000  3.9148e-08  8.2624e-08  2.5913e-08  30  620422  651813.4333  11801.5819
Rosenb  100  3000  0.0310  26.8113  27.5200  0  —  —  — 
150  5000  0.0521  37.0939  32.6063  0  —  —  —  
ChuRey  100  3000  6.1251e-23  2.2695e-21  2.7432e-21  30  122216  130083.4667  3283.9261
150  5000  1.1798e-30  1.1626e-29  1.2429e-29  30  230733  245191.6  6139.8179
Step  100  3000  0.0  0.0  0.0  30  82115  88940.6  4467.5422 
150  5000  0.0  0.0  0.0  30  154374  166652.7667  6105.3915  
Alp1  100  3000  0.0240  9.7502  5.6913  0  —  —  — 
150  5000  0.0381  6.2610  5.6690  0  —  —  —  
F2Rao  100  3000  1.5297e-10  4.5700e-10  2.2292e-10  30  213427  222775.1667  3910.2976
150  5000  4.9038e-15  3.7103e-14  1.7512e-14  30  406918  421195.1  7055.2581
Sphere  100  3000  1.2410e-09  4.6650e-09  2.4779e-09  30  231137  245599.1667  4874.0277
150  5000  8.6939e-14  3.6152e-13  2.3875e-13  30  441574  464684.3667  10701.7923
Boha3  15  5000  0.0  0.0301  0.1624  29  947  1368.5517  257.8614 
20  5000  0.0  0.0  0.0  30  1461  1877.5333  275.5259  
Boha2  15  5000  0.0  0.0347  0.1866  29  809  1102.7931  158.0520 
20  5000  0.0  0.0  0.0  30  1160  1590.8667  243.5768  
Bartel  15  5000  1.0  1.0  0.0  30  995  1238.7667  91.8632 
20  5000  1.0  1.0  0.0  30  1375  1684.0667  152.4998  
GoldP  15  5000  3.0000  3.0000  1.4203e-05  5  36981  57683.4  12921.8507
20  5000  3.0000  3.0000  1.7344e-05  3  37550  52030.0  15017.5414
Matyas  15  5000  0.0  1.6173e-11  8.7092e-11  30  572  906.9667  261.1821
20  5000  0.0  1.9566e-55  1.0537e-54  30  761  1286.0  264.6156
From Tables 2 and 3 we see that S-Jaya produces better results than Jaya on all the metrics. Specifically,

- On the best of best-of-runs metric, out of 24 cases, S-Jaya outperforms Jaya in 12 cases and is outperformed by Jaya in 2 cases, with the remaining 10 cases resulting in ties. In a few cases (such as the values of 3.0000 for the best of best-of-run fitnesses and for the mean of best-of-run fitnesses of the Goldstein-Price function, for both S-Jaya and Jaya), differences exist at the fifth or a later decimal position and therefore do not show in Tables 2 and 3.

- On the mean of best-of-runs metric, S-Jaya is the winner, with win-loss-tie figures of 18-1-5.

- The success counts favor S-Jaya, with win-loss-tie figures of 5-1-18.

- S-Jaya outperforms Jaya, 19-1-4 (win-loss-tie), on the best FirstHitEvals metric.

- On the mean FirstHitEvals metric, S-Jaya again outperforms Jaya 19-1-4.
Table 4 presents the t scores and one-tailed p values from Smith-Satterthwaite tests (Welch's tests) [10] (corresponding to unequal population variances) run on the data in Tables 2 and 3, for examining whether or not the difference between the means of Jaya and S-Jaya (for the best-of-run fitnesses metric and, separately, for the FirstHitEvals metric) is significant. Using the subscripts 1 and 2 for Jaya and S-Jaya respectively, we obtain the test statistic as a t score given by

\[
t = \frac{\mu_1 - \mu_2}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}
\]

and the degrees of freedom of the t distribution (this distribution is used to approximate the sampling distribution of the difference between the two means) as

\[
\nu = \frac{\left(\sigma_1^2/n_1 + \sigma_2^2/n_2\right)^2}{\dfrac{\left(\sigma_1^2/n_1\right)^2}{n_1 - 1} + \dfrac{\left(\sigma_2^2/n_2\right)^2}{n_2 - 1}},
\]

where the symbols \(\mu\), \(\sigma\), and \(n\) represent mean, standard deviation, and sample size, respectively. Note that even though 30 runs were executed in each case, the sample sizes are not always 30 (because not all runs were successful in all cases); for instance, for the Goldstein-Price function (executed with parameters PopSize = 15 and Gens = 5000), n_1 = n_2 = 30 for the mean best-of-run fitness calculation, whereas n_1 = 5 and n_2 = 6 for the mean FirstHitEvals computation. (To avoid division by zero, we cannot use the above formulas when both \(\sigma_1\) and \(\sigma_2\) are zero or when either \(n_1\) or \(n_2\) is unity.)
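As a concrete check, the t statistic and the degrees of freedom can be computed directly from the summary statistics reported in Tables 2 and 3 (illustrative Python; welch_t_and_df is our own helper, not code from the paper):

```python
import math

def welch_t_and_df(m1, s1, n1, m2, s2, n2):
    """t statistic and Welch-Satterthwaite degrees of freedom from
    sample means (m), standard deviations (s), and sizes (n)."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Ackley, PopSize 100, Gens 3000: Jaya (subscript 1) vs S-Jaya (subscript 2),
# with the means and standard deviations taken from Tables 3 and 2
t, df = welch_t_and_df(7.6506e-06, 1.9595e-06, 30, 1.8090e-09, 9.1920e-10, 30)
```

With these values, t comes out at about 21.38, matching the Ackley entry in Table 4.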
Using α = 0.05 as the level of significance, we see from the results in Table 4 that on the best-of-run metric, out of a total of 19 cases, ten cases produce a positive t statistic that corresponds to a one-tailed p value less than α (the p values were obtained with t tests from scipy.stats). Thus the null hypothesis of equal means must be rejected in favor of the alternative hypothesis for those ten cases. The 19 cases include a lone negative t score, but the corresponding p value is greater than 0.05. On the FirstHitEvals metric, we have a total of 19 cases (that both totals are 19 is a coincidence), of which fourteen have a positive t with a p value less than 0.05, and a single case has a negative t score with a less-than-0.05 p value.

Function  PopSize  Gens  Best-of-run Fitness  FirstHitEvals
t statistic  p value  t statistic  p value
Ackley  100  3000  21.3800  1.3355e-19  —  —
150  5000  17.4636  3.1280e-17  88.1720  3.7508e-56
Rosenb  100  3000  0.1865  0.4264  —  — 
150  5000  2.5958  0.0060  —  —  
ChuRey  100  3000  4.5314  4.6542e-05  53.5110  2.3156e-51
150  5000  5.1236  8.9954e-06  63.4031  1.1548e-47
Step  100  3000  -1.4639  0.0770  34.7952  1.9600e-38
150  5000  —  —  72.0480  2.6003e-50
Alp1  100  3000  1.8655  0.0336  —  — 
150  5000  1.1283  0.1319  —  —  
F2Rao  100  3000  11.2285  2.2360e-12  79.3863  4.1571e-61
150  5000  11.6045  1.0180e-12  81.1938  3.3244e-61
Sphere  100  3000  10.3116  1.6374e-11  85.0016  4.2333e-54
150  5000  8.2938  1.9158e-09  73.3631  3.1842e-45
Boha3  15  5000  1.0171  0.1588  0.6234  0.2678 
20  5000  —  —  0.4915  0.3125  
Boha2  15  5000  1.0171  0.1588  1.7071  0.0472 
20  5000  —  —  2.4494  0.0087  
Bartel  15  5000  —  —  7.5641  1.6549e-10
20  5000  —  —  4.4699  1.9412e-05
GoldP  15  5000  1.0676  0.1452  0.2496  0.4042 
20  5000  0.7407  0.2309  -2.8765  0.0217
Matyas  15  5000  1.0171  0.1588  0.8954  0.1875 
20  5000  1.0171  0.1588  1.9494  0.0280  
The statistical tests in Table 4 provide performance comparison separately on each of the twelve functions (using two different algorithm parameter settings for each function). A measure of the combined performance on the 12 functions taken together can be obtained using a paired-sample Wilcoxon signed-rank test on the 12-function suite. The results of this test for each of the two metrics are presented in Table 5, where the null hypothesis is that the Jaya mean and the S-Jaya mean are identical and the alternative hypothesis is that the former is larger than the latter. The second column in Table 5 shows the number of zero differences between S-Jaya and Jaya; n represents the effective number of samples obtained by ignoring the samples, if any, corresponding to zero differences (e.g., n is 24 − 5 = 19 for the mean of best-of-run fitness metric); W+ and W− are the sums of the ranks of the positive and the negative differences; W is the test statistic obtained as the minimum of W+ and W−; α represents the level of significance (a value of 0.05 is used here); and the critical W for a given n and for α = 0.05 is obtained from standard statistical tables. The W statistic is seen to be less than the critical W. Arguing that the sample size is large enough for the discrete distribution of the W statistic to be approximated by a continuous distribution, we obtain the mean of W as

\[
\mu_W = \frac{n(n+1)}{4}
\]

and its standard deviation as

\[
\sigma_W = \sqrt{\frac{n(n+1)(2n+1)}{24}}
\]

and, under the normal distribution assumption, the z statistic is obtained from

\[
z = \frac{W - \mu_W}{\sigma_W}.
\]

The one-tailed p value corresponding to the above z statistic is obtained from standard tables of the normal distribution.
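The normal approximation above can be sketched in Python as follows (our own helper, not the paper's code; zero differences are dropped and tied absolute differences receive average ranks):

```python
import math

def wilcoxon_z(diffs):
    """Normal-approximation z for the Wilcoxon signed-rank statistic.

    diffs: per-problem differences (e.g., Jaya mean minus S-Jaya mean).
    """
    d = [x for x in diffs if x != 0.0]  # ignore zero differences
    n = len(d)
    # assign average ranks to the absolute differences
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, x in zip(ranks, d) if x > 0)
    w_minus = sum(r for r, x in zip(ranks, d) if x < 0)
    w = min(w_plus, w_minus)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w - mu) / sigma
```

For n = 19 this gives μ_W = 19·20/4 = 95 and σ_W = sqrt(19·20·39/24) ≈ 24.8495, consistent with Table 5.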
From the results in Tables 4 and 5, we conclude that at the 5% significance level, S-Jaya is better than Jaya on the benchmark test-suite.
Metric  #zero diff.  n  W+  W−  W  α  Critical W  Mean of W  Std. Dev. of W  z statistic  p (left tail)

Mean of Best-of-Run Fitnesses  5  19  175  15  15  0.05  53  95  24.8495  -3.2194  0.0006
Mean of FirstHitEvals  0  19  180  10  10  0.05  53  95  24.8495  -3.4206  0.0003
4.2 Results on fuel cell stack design optimization
A proton exchange membrane fuel cell (PEMFC) [11, 17] stack design optimization problem [15, 5, 1] is considered here. This problem has been investigated in the fuel cell literature as a problem of practical importance for which the global minimum is believed to be mathematically intractable [5]. This is a constrained optimization problem where the task is to minimize the cost of building a PEMFC stack that meets specific requirements. The objective (cost) function is a function of three variables: the number of cells connected in series in each group, the number of groups connected in parallel, and the cell area. Besides these variables, the cost function involves the rated (given) terminal voltage of the stack, the output voltage at the maximum power point of the stack, the rated (given) output power of the stack, the maximum output power of the stack, predetermined constants [5] that adjust the relative importance of the different components of the cost function, and a penalty term (defined in [5]) for designs that fail to meet the requirements. The maximum output power and the corresponding voltage are obtained numerically by iterating over the load current (power is voltage times current) in the cell polarization model, which expresses the stack voltage in terms of the Nernst e.m.f., two constants known from electrochemistry, the area-specific resistance, and the different types of current densities in the cell [11, 4]. The numerical values of the parameters are provided in Table 7.
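The numerical search for the maximum power point can be sketched as a simple sweep over the load current (illustrative Python; the toy polarization curve and all constants below are placeholders, not the model or the Table 7 values of [5]):

```python
def max_power_point(v_of_i, i_max, steps=10000):
    """Sweep load current from 0 to i_max and return (P_max, V_at_Pmax).

    v_of_i: polarization curve giving stack voltage as a function of current.
    """
    best_p, best_v = 0.0, 0.0
    for k in range(1, steps + 1):
        i = i_max * k / steps
        v = v_of_i(i)
        if v <= 0.0:  # past the usable operating range
            break
        p = v * i  # power is voltage times current
        if p > best_p:
            best_p, best_v = p, v
    return best_p, best_v

# Toy linear polarization curve V = 20 - 0.5 I (a placeholder, not the paper's model)
p_max, v_mpp = max_power_point(lambda i: 20.0 - 0.5 * i, i_max=40.0)
```

For this toy curve the power 20I − 0.5I² peaks at I = 20, so the sweep returns P_max = 200 at V = 10.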
Variable  Lower bound  Upper bound

Number of cells in series per group  1  50
Number of groups in parallel  1  50
Cell area (cm²)  10  400
Parameter  Value 

12 V  
200 W  
0.5  
10  
0.001  
200  
98.010 cm  
129 mA/cm²
0.21 mA/cm²
1.26 mA/cm²
0.05 V  
0.08 V  
1.04 V 
Tables 8 and 9 present the results of S-Jaya and Jaya, respectively, on the fuel cell problem; 30 independent runs are executed for each of 13 PopSize-Gens combinations for either algorithm. For this problem, the success of a run is defined as the production of at least one solution with a cost of 13.62 or lower [5]. For 12 of the 13 cases in Table 8, the mean of the best-of-run costs is better for S-Jaya than for Jaya. On the mean FirstHitEvals metric, S-Jaya outperforms Jaya in 10 out of the 13 cases. Again, S-Jaya beats Jaya, with win-loss-tie figures of 9-1-3, on the success count metric. Results of Smith-Satterthwaite tests (Table 10) show that for the best-of-run cost metric, the t statistic is positive in all cases but one, but the one-tailed p values are not less than 0.05. Thus we do not have a strong reason at the 5% significance level to reject the null hypothesis that the two means of the best-of-run costs are equal. For the best-of-run metric, the single negative t score in Table 10 corresponds to a p value that is close to 0.5, indicating no reason to consider Jaya to be significantly better than S-Jaya in that case. The FirstHitEvals metric shows S-Jaya to be significantly better (at the 5% level) in two of the 12 comparable cases, the other cases being ties at that level of significance.
Table 11 shows the results of Wilcoxon signed-rank tests for the PEMFC problem. For each of the two metrics, the W statistic is less than the critical W. Moreover, the one-tailed p value computed from the z score is less than 0.05 for both metrics, thereby establishing a statistically significant (at the 5% level) superiority of S-Jaya over Jaya on the fuel cell problem.
PopSize  Gens  Best-of-run Fitness  FirstHitEvals

Best  Mean  Std Dev  Success  Best  Mean  Std Dev  
20  10  13.6162  13.6885  0.0759  3  127  172.0  31.9479 
15  20  13.6161  13.6255  0.0190  21  128  254.8095  47.8187 
20  20  13.6159  13.6376  0.0523  21  127  310.0  71.8338 
20  25  13.6159  13.6302  0.0484  25  127  335.8  89.0222 
25  40  13.6157  13.6164  0.0023  29  89  510.6897  166.4118 
40  25  13.6158  13.6184  0.0044  25  291  654.24  213.3435 
20  100  13.6157  13.6158  8.7813e-05  30  127  436.1333  304.5035
100  20  13.6159  13.6195  0.0029  20  463  1491.5  437.1448 
30  100  13.6157  13.6158  0.0002  30  370  585.4667  230.4605 
100  30  13.6158  13.6179  0.0022  25  463  1675.08  550.3110 
40  100  13.6157  13.6160  0.0006  30  291  778.9333  385.4906 
100  40  13.6157  13.6174  0.0022  26  463  1737.1154  622.4179 
100  100  13.6157  13.6162  0.0010  29  463  2155.3103  1395.2800 
PopSize  Gens  Best-of-run Fitness  FirstHitEvals

Best  Mean  Std Dev  Success  Best  Mean  Std Dev  
20  10  13.6213  13.7026  0.0713  0  —  —  —
15  20  13.6160  13.6374  0.0342  13  124  241.8462  45.9378 
20  20  13.6163  13.6367  0.0483  20  298  363.65  31.7636 
20  25  13.6160  13.6312  0.0463  25  298  382.36  48.3867 
25  40  13.6158  13.6298  0.0520  28  144  540.6071  144.4191 
40  25  13.6158  13.6229  0.0236  26  250  739.5  170.0993 
20  100  13.6157  13.6182  0.0126  29  298  454.6897  236.2226 
100  20  13.6160  13.7947  0.9338  14  907  1595.2857  360.2738 
30  100  13.6157  15.1444  8.2308  29  368  740.6207  546.2285 
100  30  13.6159  13.7910  0.9344  25  907  1922.44  492.2595 
40  100  13.6157  13.6202  0.0237  29  250  787.5517  222.2544 
100  40  13.6157  13.7907  0.9345  26  907  1972.7308  544.2687 
100  100  13.6157  13.7900  0.9346  27  907  2118.4074  914.8884 
PopSize  Gens  Best-of-run Fitness  FirstHitEvals

t statistic  p value  t statistic  p value
20  10  0.7429  0.2303  —  — 
15  20  1.6673  0.0512  0.7872  0.2191 
20  20  -0.0627  0.4751  3.1175  0.0021
20  25  0.0865  0.4657  2.2976  0.0137 
25  40  1.4068  0.0850  0.7256  0.2356 
40  25  1.0202  0.1578  1.5742  0.0612 
20  100  1.0461  0.1521  0.2620  0.3971 
100  20  1.0279  0.1562  0.7564  0.2276 
30  100  1.0172  0.1587  1.4129  0.0830 
100  30  1.0147  0.1593  1.6751  0.0503 
40  100  0.9838  0.1667  0.1056  0.4582 
100  40  1.0158  0.1591  1.4530  0.0763 
100  100  1.0190  0.1583  0.1178  0.4534 
Metric  #zero diff.  n  W+  W−  W  α  Critical W  Mean of W  Std. Dev. of W  z statistic  p (left tail)

Mean of Best-of-Run Fitnesses  0  13  90  1  1  0.05  21  45.5  14.3091  -3.1099  0.0009
Mean of FirstHitEvals  0  12  71  7  7  0.05  17  39  12.7475  -2.5103  0.0060
5 Conclusions
This paper presented an improvement to the Jaya algorithm by introducing new update policies in the search process. The usefulness of the present approach is that, unlike most other improvements to Jaya reported in the literature, our strategy does not require the introduction of any additional parameter. It retains both the features that the original Jaya is famous for, namely “parameterlessness” and simplicity, while providing performance that is statistically significantly better (in terms of the solution quality) and/or faster (in terms of the speed of finding a nearoptimal solution) than that produced by Jaya.
References
 [1] (2014) Using qualimetric engineering and extremal analysis to optimize a proton exchange membrane fuel cell stack. Applied Energy 128, pp. 15–26. Cited by: §4.2.
 [2] (2018) Improved teaching–learning-based and jaya optimization algorithms for solving flexible flow shop scheduling problems. Journal of Industrial Engineering International 14 (3), pp. 555–570. Cited by: §2.
 [3] (2012) PEM fuel cell modeling using differential evolution. Energy 40 (1), pp. 387–399. Cited by: §4.
 [4] (2019) A new model for constant fuel utilization and constant fuel flow in fuel cells. Applied Sciences 9 (6), pp. 1066. Cited by: §4.2.
 [5] (2019) Proton exchange membrane fuel cell stack design optimization using an improved jaya algorithm. Energies 12 (16), pp. 3176. Cited by: §4.2, §4.2.

 [6] (1996) Analysis of selection algorithms: a markov chain approach. Evolutionary Computation 4 (2), pp. 133–167. Cited by: §3.
 [7] (2016) Jaya algorithm for solving urban traffic signal control problem. In 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), pp. 1–6. Cited by: §2.
 [8] (2016) Deep learning. MIT press. Cited by: §1.
 [9] (2017) A prediction modelguided jaya algorithm for the pv system maximum power point tracking. IEEE Transactions on Sustainable Energy 9 (1), pp. 45–55. Cited by: §2.
 [10] (2000) Probability and statistics for engineers. Vol. 2000, Pearson Education London. Cited by: §4.1.
 [11] (2003) Fuel cell systems explained. Vol. 2, J. Wiley Chichester, UK. Cited by: §4.2, §4.2.
 [12] (2017) Application of eos-elm with binary jaya-based feature selection to real-time transient stability assessment using pmu data. IEEE Access 5, pp. 23092–23101. Cited by: §2.
 [13] (2019) An efficient multicore implementation of the jaya optimisation algorithm. International Journal of Parallel, Emergent and Distributed Systems 34 (3), pp. 288–320. Cited by: §2.
 [14] (2013) Genetic algorithms + data structures = evolution programs. Springer Science & Business Media. Cited by: §1.
 [15] (2004) Proton exchange membrane (pem) fuel cell stack configuration using genetic algorithms. Journal of Power Sources 131 (12), pp. 142–146. Cited by: §4.2, Table 6.
 [16] (2018) Development of pathological brain detection system using jaya optimized improved extreme learning machine and orthogonal ripplet-II transform. Multimedia Tools and Applications 77 (17), pp. 22705–22733. Cited by: §2.
 [17] (2016) Fuel cell fundamentals. John Wiley & Sons. Cited by: §4.2.
 [18] (2005) Combining particle swarm optimisation with angle modulation to solve binary problems. In 2005 IEEE Congress on Evolutionary Computation, Vol. 1, pp. 89–96. Cited by: §2.
 [19] (2018) Multiteam perturbation guiding jaya algorithm for optimization of wind farm layout. Applied Soft Computing 71, pp. 800–815. Cited by: §2.
 [20] (2019) Multiobjective optimization of abrasive waterjet machining process using jaya algorithm and promethee method. Journal of Intelligent Manufacturing 30 (5), pp. 2101–2127. Cited by: §2.
 [21] (2017) A selfadaptive multipopulation based jaya algorithm for engineering optimization. Swarm and Evolutionary Computation 37, pp. 1–26. Cited by: §2.
 [22] (2016) Jaya: a simple and new optimization algorithm for solving constrained and unconstrained optimization problems. International Journal of Industrial Engineering Computations 7 (1), pp. 19–34. Cited by: §1.
 [23] (2016) Jaya based anfis for monitoring of two class motor imagery task. IEEE Access 4, pp. 9273–9282. Cited by: §2.
 [24] (1991) A study of reproduction in generational and steadystate genetic algorithms. In Foundations of genetic algorithms, Vol. 1, pp. 94–101. Cited by: §3.
 [25] (1997) No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1 (1), pp. 67–82. Cited by: §1.